866 results for textual complexity assessment
Abstract:
Damage assessment (damage detection, localization and quantification) in structures and appropriate retrofitting will enable the safe and efficient function of structures. In this context, many Vibration Based Damage Identification Techniques (VBDIT) have emerged with potential for accurate damage assessment. VBDITs have attracted significant research interest in recent years, mainly due to their non-destructive nature and ability to assess inaccessible and invisible damage locations. Damage Index (DI) methods are also vibration based, but they are not based on a structural model. DI methods are fast and inexpensive compared to model-based methods and have the ability to automate the damage detection process. A DI method analyses the change in the vibration response of the structure between two states so that damage can be identified. Extensive research has been carried out to apply DI methods to assess damage in steel structures. Comparatively, there has been very little research interest in the use of DI methods to assess damage in Reinforced Concrete (RC) structures, due to the complexity of simulating the predominant damage type, the flexural crack. Flexural cracks in RC beams distribute non-linearly and propagate in all directions. Secondary cracks extend more rapidly along the longitudinal and transverse directions of an RC structure than existing cracks propagate in the depth direction, due to the stress distribution caused by the tensile reinforcement. Simplified damage simulation techniques (such as reductions in modulus or section depth, or the use of rotational spring elements) that have been used extensively in research on steel structures cannot be applied to simulate flexural cracks in RC elements. This highlights a significant gap in knowledge, and as a consequence VBDITs have not been successfully applied to damage assessment in RC structures. This research addresses the above gap in knowledge by developing and applying a modal strain energy based DI method to assess damage in RC flexural members. Firstly, the research evaluated different damage simulation techniques and recommended an appropriate technique to simulate the post-cracking behaviour of RC structures. The ABAQUS finite element package was used throughout the study with properly validated material models. The damaged plasticity model was recommended as the method that can correctly simulate the post-cracking behaviour of RC structures and was used in the rest of the study. Four different forms of Modal Strain Energy based Damage Indices (MSEDIs) were proposed to improve damage assessment capability by minimising the number and intensity of false alarms. The developed MSEDIs were then used to automate the damage detection process by incorporating programmable algorithms. The developed algorithms are able to identify common issues associated with vibration properties, such as mode shifting and phase change. To minimise the effect of noise on the DI calculation process, the research proposed a sequential curve-fitting technique. Finally, a statistics-based damage assessment scheme was proposed to enhance the reliability of the damage assessment results. The proposed techniques were applied to locate damage in RC beams and in a slab-on-girder bridge model to demonstrate their accuracy and efficiency. The outcomes of this research will make a significant contribution to the technical knowledge of VBDIT and will enhance the accuracy of damage assessment in RC structures. The application of the research findings to RC flexural members will enable their safe and efficient performance.
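As a rough illustration of how a modal strain energy based damage index works in practice, the sketch below computes the classic Stubbs-type element index from a single pair of mode shapes measured before and after damage. It is a minimal sketch only: the four MSEDI forms developed in the research are not specified in the abstract, and the function name, the finite-difference curvature estimate and the z-score normalisation are illustrative assumptions.

```python
import numpy as np

def stubbs_damage_index(phi_healthy, phi_damaged, x):
    """Modal strain energy (Stubbs-type) damage index for a beam-like member.

    phi_healthy, phi_damaged : one mode shape sampled at coordinates x for the
    undamaged and damaged states. Returns a standardised per-element index;
    values well above zero flag likely damage locations.
    """
    # Mode shape curvature approximated by second-order finite differences.
    kappa_h = np.gradient(np.gradient(phi_healthy, x), x)
    kappa_d = np.gradient(np.gradient(phi_damaged, x), x)

    # Element-level and total modal strain energy terms; the flexural rigidity
    # EI is assumed constant and cancels out of the ratio.
    dx = np.diff(x)
    eh = 0.5 * (kappa_h[:-1] ** 2 + kappa_h[1:] ** 2) * dx   # trapezoidal per element
    ed = 0.5 * (kappa_d[:-1] ** 2 + kappa_d[1:] ** 2) * dx
    Eh, Ed = eh.sum(), ed.sum()

    # Stubbs index: ratio of fractional strain energies, damaged over healthy.
    beta = (ed / Ed + 1.0) / (eh / Eh + 1.0)

    # Standardised (z-score) form commonly used to set a damage threshold.
    return (beta - beta.mean()) / beta.std()
```

A threshold on the standardised index (for example z > 2) would then flag candidate damaged elements, and combining indices from several modes is one common way to reduce false alarms.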
Abstract:
Higher-order thinking has featured persistently in the reform agenda for science education. The intended curriculum in various countries sets out aspirational statements for the levels of higher-order thinking to be attained by students. This study reports the extent to which chemistry examinations from four Australian states align with and facilitate the higher-order thinking skills stipulated in curriculum documents. Through content analysis, the curriculum goals were identified for each state and compared to the nature of question items in the corresponding examinations. Categories of higher-order thinking were adapted from the OECD's PISA Science test to analyze the question items. There was considerable variation in the extent to which the examinations from the states supported the curriculum intent of developing and assessing higher-order thinking. Generally, examinations that used a marks-based system tended to emphasize lower-order thinking, with a greater share of marks allocated to lower-order thinking questions. Examinations associated with criterion-referenced assessment tended to award greater credit for higher-order thinking questions. The level of complexity of the chemistry content was another factor that limited the extent to which examination questions supported higher-order thinking. Implications from these findings are drawn for the authorities responsible for designing curriculum and assessment procedures, and for teachers.
Abstract:
Objectives The intent of this paper is to examine health IT implementation processes – the barriers to and facilitators of successful implementation, the identification of a beginning set of implementation best practices, the identification of gaps in the health IT implementation body of knowledge, and recommendations for future study and application. Methods A literature review resulted in the identification of six health IT related implementation best practices, which were subsequently debated and clarified by participants attending the NI2012 Research Post Conference held in Montreal in the summer of 2012. Using the Consolidated Framework for Implementation Research (CFIR) to guide their application, the six best practices were applied to two distinct health IT implementation studies to assess their applicability. Results Assessing the implementation processes from two markedly diverse settings illustrated both the challenges and the potential of using standardized implementation processes. In support of what was discovered in the review of the literature, "one size fits all" in health IT implementation is a fallacy, particularly when global diversity is added to the mix. At the same time, several frameworks show promise for use as "scaffolding" to begin to assess best practices, their distinct dimensions, and their applicability for use. Conclusions Health IT innovations, regardless of the implementation setting, require a close assessment of many dimensions. While there is no "one size fits all", there are commonalities and best practices that can be blended, adapted, and utilized to improve the process of implementation. This paper examines health IT implementation processes and identifies a beginning set of implementation best practices, which could begin to address gaps in the health IT implementation body of knowledge.
Abstract:
Current routine cell culture techniques are only poorly suited to capturing the physiological complexity of tumor microenvironments, wherein tumor cell function is affected by intricate three-dimensional (3D), integrin-dependent cell-cell and cell-extracellular matrix (ECM) interactions. 3D cell cultures allow the investigation of cancer-associated proteases such as kallikreins, as they degrade ECM proteins and alter integrin signaling, promoting malignant cell behaviors. Here, we employed a hydrogel microwell array platform to probe, in a high-throughput mode, how ovarian cancer cell aggregates of defined size form and survive in response to kallikrein expression and paclitaxel treatment, performing microscopic, quantitative image, gene and protein analyses across varying microwell and aggregate sizes. Paclitaxel treatment increased aggregate formation and survival of kallikrein-expressing cancer cells and increased levels of integrins and integrin-related factors. Cancer cell aggregate formation improved with increasing aggregate size, thereby reducing cell death and enhancing integrin expression upon paclitaxel treatment. Therefore, hydrogel microwell arrays are a powerful tool to screen the viability of cancer cell aggregates upon modulation of protease expression, integrin engagement and anti-cancer treatment, providing a micro-scaled yet high-throughput technique to assess malignant progression and drug resistance.
Abstract:
This paper contributes to conversations about the funding and quality of education research. The paper proceeds in two parts. Part I sets the context by presenting a historical analysis of funding allocations made to education research through the ARC's Discovery Projects scheme between 2002 and 2014, and compares these trends to allocations made to another field within the Social, Behavioural and Economic Sciences assessment panel: Psychology and Cognitive Science. Part II highlights the consequences of underfunding education research by presenting evidence from an Australian Research Council Discovery project that is tracking the experiences of disaffected students who are referred to behaviour schools. The re-scoping decisions that became necessary, and the incidental costs that accrue from complications in the field, are illustrated and discussed through vignettes of research with "ghosts" who don't like school but who do like lollies, chess and Lego.
Abstract:
Aim The assessment of treatment plans is an important component in the education of radiation therapists. The establishment of a grade for a plan is currently based on subjective assessment of a range of criteria. The automation of assessment could provide a number of advantages, including faster feedback, a reduced chance of human error, and simpler aggregation of past results. Method A collection of treatments planned by a cohort of 27 second-year radiation therapy students was selected for quantitative evaluation. Treatment sites included the bladder, cervix, larynx, parotid and prostate, although only the larynx plans had been assessed in detail. The plans were designed with the Pinnacle system and exported using the DICOM framework. Assessment criteria included beam arrangement optimisation, volume contouring, target dose coverage and homogeneity, and organ-at-risk sparing. The in-house Treatment and Dose Assessor (TADA) software was evaluated for suitability in assisting with the quantitative assessment of these plans. Dose-volume data were exported in per-student and per-structure data tables, along with beam complexity metrics, dose-volume histograms, and reports on naming conventions. Results The treatment plans were exported and processed using TADA, with the processing of all 27 plans for each treatment site taking less than two minutes. Naming conventions were successfully checked against a reference protocol. Significant variations between student plans were found. Correlation with assessment feedback was established for the larynx plans. Conclusion The data generated could be used to inform the selection of future assessment criteria, monitor student development, and provide useful feedback to the students. The provision of objective, quantitative evaluations of plan quality would be a valuable addition not only to radiotherapy education programmes but also to staff development and potentially to credentialing methods. New functionality within TADA developed for this work could be applied clinically, for example, to evaluate protocol compliance.
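TADA is in-house software, so its interface is not documented in the abstract. The sketch below only illustrates, with assumed NumPy array inputs, how one of the exported quantities, a cumulative dose-volume histogram, can be derived from a dose grid and a structure mask; in practice the DICOM dose and structure data would first be loaded with a library such as pydicom.

```python
import numpy as np

def cumulative_dvh(dose, mask, bin_width=0.1):
    """Cumulative dose-volume histogram for one structure.

    dose : 3-D array of dose values (Gy) on the planning grid
    mask : boolean array of the same shape selecting the structure's voxels
    Returns (dose_bins, volume_percent) suitable for plotting or tabulation.
    """
    structure_dose = dose[mask]
    bins = np.arange(0.0, structure_dose.max() + bin_width, bin_width)
    # Fraction of the structure receiving at least each dose level.
    volume_percent = np.array([100.0 * np.mean(structure_dose >= d) for d in bins])
    return bins, volume_percent

def d95(dose, mask):
    """Illustrative per-plan metric: minimum dose to the hottest 95% of a volume."""
    return np.percentile(dose[mask], 5.0)
```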
Abstract:
Engineers must have a deep and accurate conceptual understanding of their field, and concept inventories (CIs) are one method of assessing conceptual understanding and providing formative feedback. Current CI tests use multiple-choice questions (MCQs) to identify misconceptions and have undergone reliability and validity testing to assess conceptual understanding. However, they do not readily provide diagnostic information about students' reasoning and therefore do not effectively point to specific actions that can be taken to improve student learning. We piloted the textual component of our diagnostic CI on electrical engineering students using items from the signals and systems CI. We then analysed the textual responses using automated lexical analysis software to test the effectiveness of this type of software, and interviewed the students about their experience of using the textual component. Results from the automated text analysis revealed that students held both incorrect and correct ideas about certain conceptual areas and provided indications of student misconceptions. User feedback also revealed that the inclusion of the textual component helps students assess and reflect on their own understanding.
Abstract:
1. Marine ecosystems provide critically important goods and services to society, and hence their accelerated degradation underpins an urgent need to take rapid, ambitious and informed decisions regarding their conservation and management. 2. The capacity, however, to generate the detailed field data required to inform conservation planning at appropriate scales is limited by the time- and resource-consuming methods currently used to collect and analyse field data at the large scales required. 3. The ‘Catlin Seaview Survey’, described here, introduces a novel framework for large-scale monitoring of coral reefs using high-definition underwater imagery collected with customized underwater vehicles in combination with computer vision and machine learning. This enables quantitative and geo-referenced outputs of coral reef features such as habitat types, benthic composition, and structural complexity (rugosity) to be generated across multiple kilometre-scale transects with a spatial resolution ranging from 2 to 6 m². 4. The novel application of technology described here has enormous potential to contribute to our understanding of coral reefs and associated impacts by underpinning management decisions with kilometre-scale measurements of reef health. 5. Imagery datasets from an initial survey of 500 km of seascape are freely available through an online tool called the Catlin Global Reef Record. Outputs from the image analysis using the technologies described here will be updated on the online repository as work progresses on each dataset. 6. Case studies illustrate the utility of the outputs as well as their potential to link to information from remote sensing. The potential implications of the innovative technologies for marine resource management and conservation are also discussed, along with the accuracy and efficiency of the methodologies deployed.
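The abstract does not describe the computer-vision pipeline itself. Purely as a hedged sketch of the general idea, the code below trains a simple benthic-composition classifier on annotated image patches using hand-crafted colour/texture features and scikit-learn; the feature set, labels and function names are illustrative assumptions, and the actual Catlin Seaview Survey analysis uses far richer features and larger-scale machine learning.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def patch_features(patch):
    """Very simple colour/texture features for one image patch (H x W x 3)."""
    means = patch.reshape(-1, 3).mean(axis=0)           # mean RGB
    stds = patch.reshape(-1, 3).std(axis=0)             # colour variability
    grad = np.abs(np.diff(patch.mean(axis=2), axis=0))  # crude texture proxy
    return np.concatenate([means, stds, [grad.mean()]])

def train_benthic_classifier(patches, labels):
    """patches: annotated image patches; labels: e.g. 'hard coral', 'algae', 'sand'."""
    X = np.stack([patch_features(p) for p in patches])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf
```

Percent cover per transect can then be estimated as the fraction of patches assigned to each class, geo-referenced using the image metadata.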
Abstract:
Objective To examine the clinical utility of the Cornell Scale for Depression in Dementia (CSDD) in nursing homes. Setting 14 nursing homes in Sydney and Brisbane, Australia. Participants 92 residents with a mean age of 85 years. Measurements Consenting residents were assessed by care staff for depression using the CSDD as part of their routine assessment. Specialist clinicians conducted assessments of depression using the Semi-structured Clinical Diagnostic Interview for DSM-IV-TR Axis I Disorders for residents without dementia, or the Provisional Diagnostic Criteria for Depression in Alzheimer Disease for residents with dementia, to establish expert clinical diagnoses of depression. The diagnostic performance of the staff-completed CSDD was analyzed against expert diagnosis using receiver operating characteristic (ROC) curves. Results The CSDD showed low diagnostic accuracy, with areas under the ROC curve of 0.69, 0.68 and 0.70 for the total sample, residents with dementia and residents without dementia, respectively. At the standard CSDD cutoff score, sensitivity and specificity were 71% and 59% for the total sample, 69% and 57% for residents with dementia, and 75% and 61% for residents without dementia. The Youden index (for optimizing cut-points) suggested different depression cutoff scores for residents with and without dementia. Conclusion When administered by nursing home staff, the clinical utility of the CSDD for identifying depression is highly questionable. The complexity of the scale, the time required to collect the relevant information, and staff skills and knowledge in assessing depression in older people must be considered when using the CSDD in nursing homes.
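For readers unfamiliar with the analysis, the following is a minimal sketch of the ROC and Youden-index evaluation described above, using scikit-learn; the variable names and inputs are assumptions, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def evaluate_screening_scale(scores, diagnoses, cutoff):
    """Diagnostic performance of a screening scale against expert diagnosis.

    scores    : total scale scores (e.g. staff-completed CSDD totals)
    diagnoses : 1 = expert-diagnosed depression, 0 = no depression
    cutoff    : scale cut-point; scores >= cutoff are screen-positive
    """
    scores = np.asarray(scores, dtype=float)
    diagnoses = np.asarray(diagnoses, dtype=int)

    auc = roc_auc_score(diagnoses, scores)          # area under the ROC curve
    screen_positive = scores >= cutoff
    sensitivity = np.mean(screen_positive[diagnoses == 1])
    specificity = np.mean(~screen_positive[diagnoses == 0])

    # Youden index J = sensitivity + specificity - 1, maximised over thresholds.
    fpr, tpr, thresholds = roc_curve(diagnoses, scores)
    optimal_cutoff = thresholds[np.argmax(tpr - fpr)]

    return {"auc": auc, "sensitivity": sensitivity,
            "specificity": specificity, "youden_optimal_cutoff": optimal_cutoff}
```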
Abstract:
Objective To synthesise recent research on the use of machine learning approaches to mining textual injury surveillance data. Design Systematic review. Data sources The electronic databases searched included PubMed, Cinahl, Medline, Google Scholar, and Proquest. The bibliographies of all relevant articles were examined and associated articles were identified using a snowballing technique. Selection criteria For inclusion, articles were required to meet the following criteria: (a) used a health-related database, (b) focused on injury-related cases, and (c) used machine learning approaches to analyse textual data. Methods The papers identified through the search were screened, resulting in 16 papers selected for review. Articles were reviewed to describe the databases and methodology used, the strengths and limitations of different techniques, and the quality assurance approaches used. Due to heterogeneity between studies, meta-analysis was not performed. Results Occupational injuries were the focus of half of the machine learning studies, and the most common methods described were Bayesian probability or Bayesian network based methods used either to predict injury categories or to extract common injury scenarios. Models were evaluated through comparison with gold standard data, content expert evaluation, or statistical measures of quality. Machine learning was found to provide high precision and accuracy when predicting a small number of categories, and was valuable for visualisation of injury patterns and prediction of future outcomes. However, difficulties related to generalisability, source data quality, complexity of models, and integration of content and technical knowledge were discussed. Conclusions The use of narrative text for injury surveillance has grown in popularity, complexity and quality over recent years. With advances in data mining techniques, increased capacity for analysis of large databases, involvement of computer scientists in the injury prevention field, and more comprehensive use and description of quality assurance methods in text mining approaches, it is likely that we will see continued growth and advancement in knowledge of text mining in the injury field.
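As a hedged illustration of the Bayesian text-classification approach the review found to be most common, the sketch below builds a multinomial naive Bayes classifier over TF-IDF features with scikit-learn; the inputs and feature choices are illustrative assumptions rather than any reviewed study's pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def build_injury_classifier(narratives, categories):
    """Naive Bayes classifier of free-text injury narratives into coded categories.

    narratives : list of injury description strings
    categories : list of injury category codes, one per narrative
    """
    model = make_pipeline(
        TfidfVectorizer(lowercase=True, stop_words="english", ngram_range=(1, 2)),
        MultinomialNB(),
    )
    # Cross-validated accuracy as one simple statistical measure of quality.
    scores = cross_val_score(model, narratives, categories, cv=5)
    model.fit(narratives, categories)
    return model, scores.mean()
```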
Abstract:
In life cycle assessment studies, greenhouse gas (GHG) emissions from direct land-use change have been estimated to make a significant contribution to the global warming potential of agricultural products. However, these estimates have a high uncertainty due to the complexity of data requirements and the difficulty of attributing land-use change. This paper presents estimates of GHG emissions from direct land-use change from native woodland to grazing land for two beef production regions in eastern Australia, which were the subject of a multi-impact life cycle assessment study of premium beef production. Spatially and temporally consistent datasets were derived for areas of forest cover and biomass carbon stocks using published remotely sensed tree-cover data and regionally applicable allometric equations consistent with Australia's national GHG inventory report. Standard life cycle assessment methodology was used to estimate GHG emissions and removals from direct land-use change attributed to beef production. For the northern-central New South Wales region of Australia, estimates ranged from a net emission of 0.03 t CO2-e ha⁻¹ year⁻¹ to a net removal of 0.12 t CO2-e ha⁻¹ year⁻¹ under the low and high scenarios, respectively, for sequestration in regrowing forests. For the same period (1990-2010), the study region in southern-central Queensland was estimated to have net emissions from land-use change in the range of 0.45-0.25 t CO2-e ha⁻¹ year⁻¹. The difference between regions reflects the continuation of higher rates of deforestation in Queensland until strict regulation in 2006, whereas native vegetation protection laws were introduced earlier in New South Wales. On the basis of liveweight produced at the farm gate, emissions from direct land-use change for 1990-2010 were comparable in magnitude to those from other on-farm sources, which were dominated by enteric methane. However, calculation of land-use change impacts for the Queensland region for a period starting in 2006 gave a range from net emissions of 0.11 t CO2-e ha⁻¹ year⁻¹ to net removals of 0.07 t CO2-e ha⁻¹ year⁻¹. This study demonstrated a method for deriving spatially and temporally consistent datasets to improve estimates of direct land-use change impacts in life cycle assessment. It identified areas of uncertainty, including rates of sequestration in woody regrowth and impacts of land-use change on soil carbon stocks in grazed woodlands, but also showed the potential for direct land-use change to represent a net sink for GHGs.
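The sign convention behind the reported ranges (net emission versus net removal) can be illustrated with a trivial calculation: the net flux is clearing emissions minus regrowth sequestration, averaged over area and time. The figures below are placeholders chosen only to show how low and high sequestration scenarios flip the sign; they are not the study's inventory data.

```python
def net_luc_flux(cleared_biomass_emissions_t, regrowth_sequestration_t, area_ha, years):
    """Net GHG flux from direct land-use change, in t CO2-e per hectare per year.

    Positive values are net emissions; negative values are net removals
    (the convention behind the abstract's 'net sink' result).
    """
    return (cleared_biomass_emissions_t - regrowth_sequestration_t) / (area_ha * years)

# Low and high regrowth-sequestration scenarios with placeholder totals over
# 1 million ha and a 20-year assessment period.
low = net_luc_flux(cleared_biomass_emissions_t=5.0e6,
                   regrowth_sequestration_t=4.0e6, area_ha=1.0e6, years=20)
high = net_luc_flux(cleared_biomass_emissions_t=5.0e6,
                    regrowth_sequestration_t=6.0e6, area_ha=1.0e6, years=20)
print(low, high)   # 0.05 (net emission) and -0.05 (net removal) with these placeholders
```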
Abstract:
Monitoring pedestrian and cyclist movement is an important area of research in transport, crowd safety, urban design and human behaviour assessment. Media Access Control (MAC) address data has recently been used as a potential source of information for extracting features of people's movement. MAC addresses are unique identifiers of the WiFi and Bluetooth wireless interfaces in smart electronic devices such as mobile phones, laptops and tablets. The unique number of each WiFi and Bluetooth MAC address can be captured and stored by MAC address scanners. MAC address data in effect allows for unannounced, non-participatory tracking of people, and its use has recently focused on applications in mass events, shopping centres, airports, train stations, etc. For travel-time estimation, setting up a scanner with a high antenna gain is usually recommended on highways and main roads to track vehicle movements, whereas high gains can have drawbacks in the case of pedestrians and cyclists. Pedestrians and cyclists mainly move in built-up districts and city pathways where there is significant noise from other, fixed WiFi and Bluetooth devices. High antenna gains cover wide areas, which results in scanning more samples from pedestrians' and cyclists' MAC devices; however, anomalies (such as fixed devices) may also be captured, increasing the complexity and processing time of the data analysis. On the other hand, low-gain antennas produce fewer anomalies in the data, but at the cost of a lower overall sample size of pedestrian and cyclist data. This paper studies the effect of antenna characteristics on MAC address data for travel-time estimation for pedestrians and cyclists. The results of the empirical case study compare the effects of low and high antenna gains in order to suggest an optimal setup for increasing the accuracy of pedestrian and cyclist travel-time estimation.
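To make the travel-time estimation step concrete, here is a minimal sketch that matches (hashed) MAC detections between two scanners and discards devices that linger at one scanner, a crude stand-in for the anomaly filtering discussed above. The data layout, dwell-time threshold and hashing assumption are illustrative, not the paper's actual method.

```python
def travel_times(detections_a, detections_b, max_dwell_s=300):
    """Pedestrian/cyclist travel times between two MAC scanners.

    detections_a, detections_b : iterables of (hashed_mac, timestamp_s) from
    scanners at either end of a path segment; MACs are assumed to be hashed
    before storage for privacy. Devices seen at scanner A for longer than
    max_dwell_s are treated as fixed (anomalous) devices and dropped.
    """
    first_seen_a, last_seen_a = {}, {}
    for mac, t in detections_a:
        first_seen_a[mac] = min(t, first_seen_a.get(mac, t))
        last_seen_a[mac] = max(t, last_seen_a.get(mac, t))

    earliest_b = {}
    for mac, t in detections_b:
        earliest_b[mac] = min(t, earliest_b.get(mac, t))

    times = []
    for mac, t_b in earliest_b.items():
        if mac not in first_seen_a:
            continue
        # Drop likely fixed WiFi/Bluetooth devices lingering at scanner A.
        if last_seen_a[mac] - first_seen_a[mac] > max_dwell_s:
            continue
        dt = t_b - last_seen_a[mac]
        if dt > 0:
            times.append(dt)
    return times
```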
Abstract:
Concept inventory tests are one method to evaluate conceptual understanding and identify possible misconceptions. The multiple-choice question format, offering a choice between a correct selection and common misconceptions, can provide an assessment of students' conceptual understanding along various dimensions. Misconceptions of some engineering concepts exist due to a lack of mental frameworks, or schemas, for these types of concepts or conceptual areas. This study incorporated an open textual response component into a multiple-choice concept inventory test to capture written explanations of students' selections. The study's goal was to identify, through text analysis of student responses, the types and categorizations of concepts in these explanations that had not been uncovered by the distractor selections. The analysis of the textual explanations for a subset of the discrete-time signals and systems concept inventory questions revealed that students have difficulty conceptually explaining several dimensions of signal processing. This contributed to their inability to provide a clear explanation of the underlying concepts, such as mathematical concepts. The methods used in this study evaluate students' understanding of signals and systems concepts through their ability to express that understanding in written text, which may introduce a bias in favor of students with strong written communication skills. This study presents a framework for extracting and identifying the types of concepts students use to express their reasoning when answering conceptual questions.
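The text-analysis tooling itself is not specified in the abstract; the sketch below only conveys the general idea of mapping free-text student explanations onto concept categories with a keyword lexicon. The lexicon, category names and functions are invented for illustration and are not the study's coding scheme.

```python
import re

# Illustrative concept lexicon (terms are examples only).
CONCEPT_TERMS = {
    "frequency_domain": ["fourier", "spectrum", "frequency", "bandwidth"],
    "convolution":      ["convolution", "impulse response", "filter"],
    "sampling":         ["sampling", "aliasing", "nyquist"],
    "mathematics":      ["integral", "summation", "complex exponential"],
}

def tag_concepts(explanation):
    """Tag a free-text student explanation with the concept areas it mentions."""
    text = explanation.lower()
    return {
        concept
        for concept, terms in CONCEPT_TERMS.items()
        if any(re.search(r"\b" + re.escape(term) + r"\b", text) for term in terms)
    }

def concept_frequencies(explanations):
    """Count how often each concept area appears across a cohort's responses."""
    counts = {concept: 0 for concept in CONCEPT_TERMS}
    for explanation in explanations:
        for concept in tag_concepts(explanation):
            counts[concept] += 1
    return counts
```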