23 results for Lipschitz selections
Abstract:
Background: A major challenge for assessing students’ conceptual understanding of STEM subjects is the capacity of assessment tools to reliably and robustly evaluate student thinking and reasoning. Multiple-choice tests are typically used to assess student learning and are designed to include distractors that can indicate students’ incomplete understanding of a topic or concept based on which distractor the student selects. However, these tests fail to provide the critical information uncovering the how and why of students’ reasoning for their multiple-choice selections. Open-ended or structured response questions are one method for capturing higher level thinking, but are often costly in terms of time and attention to properly assess student responses. Purpose: The goal of this study is to evaluate methods for automatically assessing open-ended responses, e.g. students’ written explanations and reasoning for multiple-choice selections. Design/Method: We incorporated an open response component into an online signals and systems multiple-choice test to capture written explanations of students’ selections. The effectiveness of an automated approach for identifying and assessing student conceptual understanding was evaluated by comparing results of lexical analysis software packages (Leximancer and NVivo) to expert human analysis of student responses. In order to understand and delineate the process for effectively analysing text provided by students, the researchers evaluated strengths and weaknesses of both the human and automated approaches. Results: Human and automated analyses revealed both correct and incorrect associations for certain conceptual areas. For some questions, these associations were not anticipated or included in the distractor selections, showing how multiple-choice questions alone fail to capture a comprehensive picture of student understanding.
The comparison of textual analysis methods revealed the capability of automated lexical analysis software to assist in the identification of concepts and their relationships for large textual data sets. We also identified several challenges in using automated analysis, as well as in manual and computer-assisted analysis. Conclusions: This study highlighted the usefulness of incorporating and analysing students’ reasoning or explanations in understanding how students think about certain conceptual ideas. The ultimate value of automating the evaluation of written explanations is that it can be applied more frequently and at various stages of instruction to formatively evaluate conceptual understanding and engage students in reflective
Abstract:
A fractional FitzHugh–Nagumo monodomain model with zero Dirichlet boundary conditions is presented, generalising the standard monodomain model that describes the propagation of the electrical potential in heterogeneous cardiac tissue. The model consists of a coupled fractional Riesz space nonlinear reaction-diffusion model and a system of ordinary differential equations, describing the ionic fluxes as a function of the membrane potential. We solve this model by decoupling the space-fractional partial differential equation and the system of ordinary differential equations at each time step. This decoupling means treating the nonlinear source term of the fractional Riesz space reaction-diffusion model as only locally Lipschitz. The fractional Riesz space nonlinear reaction-diffusion model is solved using an implicit numerical method with the shifted Grünwald–Letnikov approximation, and the stability and convergence are discussed in detail in the context of the local Lipschitz property. Some numerical examples are given to show the consistency of our computational approach.
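The discretisation described in this abstract can be sketched in a few lines. The following is a minimal illustration only, for a uniform 1D grid with zero Dirichlet boundaries; the function names are ours, and the decoupled time step (explicit reaction, implicit fractional diffusion) is a simplification of the scheme the paper analyses.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grünwald–Letnikov weights w_k = (-1)^k C(alpha, k) via the
    standard recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha+1)/k)."""
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def riesz_matrix(alpha, N, h):
    """Shifted GL approximation of the Riesz fractional derivative of
    order alpha (1 < alpha <= 2) on N interior grid points with spacing h
    and zero Dirichlet boundaries. Combines the left- and right-shifted
    one-sided sums with the usual -1/(2 cos(pi*alpha/2)) scaling."""
    w = gl_weights(alpha, N + 2)
    c = -1.0 / (2.0 * np.cos(np.pi * alpha / 2.0) * h**alpha)
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            left = w[i - j + 1] if i - j + 1 >= 0 else 0.0
            right = w[j - i + 1] if j - i + 1 >= 0 else 0.0
            A[i, j] = c * (left + right)
    return A

def implicit_step(u, dt, A, f):
    """One decoupled time step: the nonlinear source f (e.g. the ionic
    current) is taken explicitly, the fractional diffusion implicitly."""
    rhs = u + dt * f(u)
    return np.linalg.solve(np.eye(len(u)) - dt * A, rhs)
```

As a sanity check, at alpha = 2 the Riesz operator reduces to the classical Laplacian, so `riesz_matrix(2.0, N, h)` reproduces the familiar (1, -2, 1)/h² stencil.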
Abstract:
In the structural health monitoring (SHM) field, long-term continuous vibration-based monitoring is becoming increasingly popular as this could keep track of the health status of structures during their service lives. However, implementing such a system is not always feasible due to on-going conflicts between budget constraints and the need for sophisticated systems to monitor real-world structures under their demanding in-service conditions. To address this problem, this paper presents a comprehensive development of a cost-effective and flexible vibration DAQ system for long-term continuous SHM of a newly constructed institutional complex with a special focus on the main building. First, selections of sensor type and sensor positions are scrutinized to overcome adversities such as low-frequency and low-level vibration measurements. In order to economically tackle the sparse measurement problem, a cost-optimized Ethernet-based peripheral DAQ model is adopted to form the system skeleton. A combination of a high-resolution timing coordination method based on the TCP/IP command communication medium and a periodic system resynchronization strategy is then proposed to synchronize data from multiple distributed DAQ units. The results of both experimental evaluations and experimental–numerical verifications show that the proposed DAQ system in general and the data synchronization solution in particular work well and can provide a promising cost-effective and flexible alternative for use in real-world SHM projects. Finally, the paper demonstrates simple but effective ways to make use of the developed monitoring system for long-term continuous structural health evaluation as well as to use the instrumented building herein as a multi-purpose benchmark structure for studying not only practical SHM problems but also synchronization related issues.
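The abstract does not spell out the timing-coordination method itself. Purely as a generic illustration of the idea behind command-channel synchronization, the classic NTP-style round-trip offset estimate and a drift-triggered resync check could be sketched as follows; the function names and the tolerance value are our assumptions, not the paper's.

```python
def estimate_offset(t0, t1, t2, t3):
    """NTP-style clock offset of a remote DAQ unit relative to the
    coordinator, assuming roughly symmetric network delay.

    t0: coordinator sends the request (coordinator clock)
    t1: unit receives the request     (unit clock)
    t2: unit sends the reply          (unit clock)
    t3: coordinator receives the reply (coordinator clock)
    """
    return ((t1 - t0) + (t2 - t3)) / 2.0

def needs_resync(offset_s, tolerance_s=0.001):
    """Trigger a periodic resynchronization once the estimated clock
    drift exceeds the chosen tolerance (here 1 ms, an assumed value)."""
    return abs(offset_s) > tolerance_s
```

For example, if the unit's clock runs 5 s ahead and each one-way network delay is 2 s, the four timestamps (0, 7, 8, 5) recover the 5 s offset exactly.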
Abstract:
Purpose: Director selection is an important yet under-researched topic. The purpose of this paper is to contribute to extant literature by gaining a greater understanding of how and why new board members are recruited. Design/methodology/approach: This exploratory study uses in-depth interviews with Australian non-executive directors to identify what selection criteria are deemed most important when selecting new director candidates and how selection practices vary between organisations. Findings: The findings indicate that appointments to the board are based on two key attributes: first, the candidates’ ability to contribute complementary skills and second, the candidates’ ability to work well with the existing board. Despite commonality in these broad criteria, board selection approaches vary considerably between organisations. As a result, some boards do not adequately assess both criteria when appointing a new director, hence increasing the chance of a mis-fit between the position and the appointed director. Research limitations/implications: The study highlights the importance of both individual technical capabilities and social compatibility in director selections. The authors introduce a new perspective through which future research may consider director selection: fit. Originality/value: The in-depth analysis of the director selection process highlights some less obvious and more nuanced issues surrounding directors’ appointment to the board. Recurrent patterns indicate the need for both technical and social considerations. Hence the study is a first step in synthesising the current literature and illustrates the need for a multi-theoretical approach in future director selection research.
Abstract:
Assessing students’ conceptual understanding of technical content is important for instructors as well as students to learn content and apply knowledge in various contexts. Concept inventories that identify possible misconceptions through validated multiple-choice questions are helpful in identifying a misconception that may exist, but do not provide a meaningful assessment of why misconceptions exist or the nature of the students’ understanding. We conducted a case study with undergraduate students in an electrical engineering course by testing a validated multiple-choice concept inventory that we augmented with a component for students to provide written explanations for their multiple-choice selection. Results revealed that correctly chosen multiple-choice selections did not always match correct conceptual understanding for questions testing a specific concept. The addition of a text-response to multiple-choice concept inventory questions provided an enhanced and meaningful assessment of students’ conceptual understanding and highlighted variables associated with current concept inventories or multiple-choice questions.
Abstract:
This creative work is an original soundtrack for the multimedia performance adaptation of Oscar Wilde’s De Profundis, led by David Fenton and Brian Lucas and produced by Metro Arts. Intermediality offers unique challenges to the composer writing for live performance. Given the text-based nature of the piece, and the prevalence of screen content, music had a distinct role to play in supporting the intermedial performance environment. Drawing from Oscar Wilde’s own writings in the initial stages [“...richer cadences…more curious effects”, “…the cry of Marsyas” and the “deferred resolution of Chopin”], the deliberately risky compositional process experimented with improvised location recordings and found sounds, random and fragmented assemblages of vintage recordings, rough methods and obsolete recording technology, and the sonic kinship of the hissing sibilances of the sea, theatrical applause and the crackle of antique recording devices (which had only just been invented in Wilde’s time), all worked into wefts of sound. As the soundtrack emerged, it was clearly resistant to ‘concepts’ imposed from the outside, and as the field of possibilities expanded and engaged in dialogue with the other elements of the performance (live and projected), certain pieces were selected by the director and curated into the emerging work. Thus leitmotifs emerged, rather than being imposed from the outset, with a particular through-line holding: if it was too obviously like ‘music’ (which is usually used in theatre as emotional lubrication and narrative signpost), it didn’t work, and if it sounded like avant-garde sound-art, it was too grating and detracted from the primacy of the text.
As a composer I worked the sweet spot between these two poles while also serving David Fenton’s curation: he determined which compositions to incorporate, reiterate and omit as part of the process of writing text, action and image, and the compositional process responded with organic elaborations and variations on these selections. Musical resolution was mostly deferred until the closing stages of the performance. The soundtrack was present for the duration of the show, and ArtsHub reviewed the musical component thus: “...the score by David Megarrity is a refined, understated ambient scaffolding.” The work premiered at the Visy Theatre, Brisbane Powerhouse, on 22 April 2015.
Abstract:
Concept inventory tests are one method to evaluate conceptual understanding and identify possible misconceptions. The multiple-choice question format, offering a choice between a correct selection and common misconceptions, can provide an assessment of students' conceptual understanding in various dimensions. Misconceptions of some engineering concepts exist due to a lack of mental frameworks, or schemas, for these types of concepts or conceptual areas. This study incorporated an open textual response component in a multiple-choice concept inventory test to capture written explanations of students' selections. The study's goal was to identify, through text analysis of student responses, the types and categorizations of concepts in these explanations that had not been uncovered by the distractor selections. The analysis of the textual explanations of a subset of the discrete-time signals and systems concept inventory questions revealed that students have difficulty conceptually explaining several dimensions of signal processing. This contributed to their inability to provide a clear explanation of the underlying concepts, such as mathematical concepts. The methods used in this study evaluate students' understanding of signals and systems concepts through their ability to express understanding in written text. This may present a bias for students with strong written communication skills. This study presents a framework for extracting and identifying the types of concepts students use to express their reasoning when answering conceptual questions.
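Lexical analysis tools of the kind used in this line of work (e.g. Leximancer) build on how often concept terms co-occur in responses. As a toy illustration only, and emphatically not the software's actual algorithm, counting which terms from a hand-picked vocabulary appear together in the same student response could look like this:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(responses, vocabulary):
    """Count how often pairs of concept terms appear together in the
    same free-text response (simple substring matching, lowercased)."""
    counts = Counter()
    for text in responses:
        present = sorted({term for term in vocabulary if term in text.lower()})
        for pair in combinations(present, 2):
            counts[pair] += 1
    return counts
```

High-count pairs then suggest which concepts students associate when explaining their multiple-choice selections, correctly or otherwise; the vocabulary and example terms here are our own assumptions.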
Abstract:
The most difficult operation in flood inundation mapping using optical flood images is to map the ‘wet’ areas where trees and houses are partly covered by water. This can be referred to as a typical problem of the presence of mixed pixels in the images. A number of automatic information extracting image classification algorithms have been developed over the years for flood mapping using optical remote sensing images, with most labelling a pixel as a particular class. However, they often fail to generate reliable flood inundation mapping because of the presence of mixed pixels in the images. To solve this problem, spectral unmixing methods have been developed. In this thesis, methods for selecting endmembers and methods to model the primary classes for unmixing, the two most important issues in spectral unmixing, are investigated. We conduct comparative studies of three typical spectral unmixing algorithms: Partially Constrained Linear Spectral Unmixing, Multiple Endmember Selection Mixture Analysis, and spectral unmixing using the Extended Support Vector Machine method. They are analysed and assessed by error analysis in flood mapping using MODIS, Landsat and WorldView-2 images. The conventional root mean square error assessment is applied to obtain errors for the estimated fractions of each primary class. Moreover, a newly developed Fuzzy Error Matrix is used to obtain a clear picture of error distributions at the pixel level. This thesis shows that the Extended Support Vector Machine method is able to provide a more reliable estimation of fractional abundances and allows the use of a complete set of training samples to model a defined pure class. Furthermore, it can be applied to the analysis of both pure and mixed pixels to provide integrated hard-soft classification results.
Our research also identifies and explores a serious drawback in relation to endmember selections in current spectral unmixing methods which apply fixed sets of endmember classes or pure classes for mixture analysis of every pixel in an entire image. However, as it is not accurate to assume that every pixel in an image must contain all endmember classes, these methods usually cause an over-estimation of the fractional abundances in a particular pixel. In this thesis, a subset of adaptive endmembers in every pixel is derived using the proposed methods to form an endmember index matrix. The experimental results show that using the pixel-dependent endmembers in unmixing significantly improves performance.
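The partially constrained (sum-to-one) linear unmixing compared in the thesis can be illustrated with a small least-squares sketch. The soft-constraint augmentation trick and the function name below are our own simplification; the nonnegativity constraint of fully constrained unmixing is omitted, and per-pixel endmember subset selection would restrict the columns of `E` before solving.

```python
import numpy as np

def unmix_sum_to_one(E, pixel, weight=1e5):
    """Estimate fractional abundances a with sum(a) ~= 1.

    E:     (bands x endmembers) spectral signature matrix
    pixel: (bands,) observed mixed spectrum
    The sum-to-one constraint is enforced softly by appending one
    heavily weighted equation to the least-squares system.
    """
    n = E.shape[1]
    A = np.vstack([E, weight * np.ones((1, n))])
    b = np.append(pixel, weight)
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a
```

For a pixel that is an exact 30/70 mixture of two endmember spectra, the solver recovers abundances of 0.3 and 0.7.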