877 results for Classify


Relevance:

10.00%

Publisher:

Abstract:

Much has been said and documented about the key role that reflection can play in the ongoing development of e-portfolios, particularly e-portfolios utilised for teaching and learning. A review of e-portfolio platforms reveals that a designated space for documenting and collating personal reflections is a typical design feature of both open source and commercial off-the-shelf software. Further investigation of tools within e-portfolio systems for facilitating reflection reveals that, apart from enabling personal journaling through blogs or other writing, scaffolding tools that encourage the actual process of reflection are under-developed. Investigation of a number of prominent e-portfolio projects also reveals that reflection, while presented as critically important, is often viewed as an activity that takes place after a learning activity or experience, rather than as intrinsic to it. This paper assumes an alternative, richer conception of reflection: a process integral to a wide range of activities associated with learning, such as inquiry, communication, editing, analysis and evaluation. Such a conception is consistent with the literature associated with ‘communities of practice’, which is replete with insight into ‘learning through doing’, and with a ‘whole minded’ approach to inquiry. Thus, graduates who are ‘reflective practitioners’, integrating reflection into their learning, will have more to offer a prospective employer than graduates who have adopted an episodic approach to reflection. So, what kinds of tools might facilitate integrated reflection? This paper outlines a number of possibilities for consideration and development. Such tools do not have to be embedded within e-portfolio systems, although there are benefits in doing so.
In order to inform the future design of e-portfolio systems, this paper presents a faceted model of knowledge creation that depicts an ‘ecology of knowing’ in which interaction with, and the production of, learning content is deepened through the construction of well-formed questions about that content. In particular, questions that are initiated by ‘why’ are explored because they are distinguished from the other ‘journalist’ questions (who, what, when, where and how) in that answers to them demand explanative, as opposed to descriptive, content. They require a rationale. Although ‘why’ questions do not belong to any one genre and are not simple to classify — responses can contain motivational, conditional, causal, and/or existential content — they do make a difference in the acquisition of understanding. The development of scaffolding that builds on why-questioning to enrich learning is the motivation behind the research that has informed this paper.


Theory predicts that efficiency prevails on credence goods markets if customers are able to verify which quality they receive from an expert seller. In a series of experiments with endogenous prices we observe that verifiability fails to result in efficient provision behavior and leads to very similar results to a setting without verifiability. Some sellers always provide appropriate treatment, even when their own money maximization calls for over- or undertreatment. Overall, our endogenous-price results suggest that both inequality aversion and a taste for efficiency play an important role in experts' provision behavior. We contrast the implications of those two motivations theoretically and discriminate between them empirically using a fixed-price design. We then classify experimental experts according to their provision behavior.


A rule-based approach for classifying previously identified medical concepts in clinical free text into an assertion category is presented. There are six categories of assertion for the task: Present, Absent, Possible, Conditional, Hypothetical and Not associated with the patient. The assertion classification algorithms were largely based on extending the popular NegEx and ConText algorithms. In addition, the clinical healthcare terminology SNOMED CT and other publicly available dictionaries were used to classify assertions that did not fit the NegEx/ConText model. The data for this task include discharge summaries from Partners HealthCare and from Beth Israel Deaconess Medical Center, as well as discharge summaries and progress notes from the University of Pittsburgh Medical Center. The set consists of 349 discharge reports, each with paired ground-truth concept and assertion files, for system development, and 477 reports for evaluation. The system’s performance on the evaluation data set was 0.83 for each of recall, precision and F1-measure. Although the rule-based system shows promise, further improvements could be made by incorporating machine learning approaches.
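As a rough illustration of how such windowed trigger-phrase rules work, the sketch below applies a NegEx/ConText-style lookup over the few tokens preceding a concept mention. The trigger lists, window size and category names are simplified assumptions for illustration, not the rules actually used in the system described above.

```python
import re

# Illustrative trigger phrases only; the real NegEx/ConText lexicons are far larger.
TRIGGERS = {
    "absent": ["no", "denies", "without", "ruled out"],
    "possible": ["possible", "may be", "suspected"],
    "hypothetical": ["if", "should you develop", "return if"],
    "conditional": ["on exertion", "when lying down"],
    "not_patient": ["family history of", "mother had"],
}

def classify_assertion(sentence, concept, window=6):
    """Assign an assertion category to a concept found in a sentence.

    Looks for a trigger phrase within `window` tokens before the concept;
    defaults to 'present' when no trigger fires (the NegEx default).
    """
    tokens = sentence.lower().split()
    concept_tokens = concept.lower().split()
    for i in range(len(tokens) - len(concept_tokens) + 1):
        if tokens[i:i + len(concept_tokens)] == concept_tokens:
            scope = " ".join(tokens[max(0, i - window):i])
            for category, phrases in TRIGGERS.items():
                if any(re.search(r"\b" + re.escape(p) + r"\b", scope)
                       for p in phrases):
                    return category
            return "present"
    return "present"
```

For example, "The patient denies chest pain" yields "absent" for the concept "chest pain", while a sentence with no preceding trigger falls through to "present".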


This paper investigates the determinants of China’s regional innovation capacity (RIC) and how these determinants vary between different types of regions. Based on the framework of national innovation capacity (NIC) and research on innovation systems, this paper develops a framework of RIC in the Chinese context. Using panel data from 1991 to 2009, cluster analysis is first employed to classify regions according to their innovation development path. Panel-data regressions with a fixed-effects model are then conducted to explore the determinants of RIC and how these vary across the regional clusters. We find that the 30 regions can be clustered into three groups, and that there are considerable differences in the drivers of RIC between these groups.
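To illustrate the kind of estimator involved, here is a minimal within (fixed-effects) transformation for a single regressor: demeaning within each region removes the region-specific intercept before estimating the common slope. The two-region panel below is synthetic and hypothetical, not the paper's 1991-2009 dataset.

```python
from collections import defaultdict

def within_slope(panel):
    """Fixed-effects (within) estimator for y = a_i + b*x + e on panel rows
    (region, x, y): demean x and y within each region, then fit the slope
    by least squares through the origin on the demeaned data."""
    groups = defaultdict(list)
    for region, x, y in panel:
        groups[region].append((x, y))
    num = den = 0.0
    for obs in groups.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

# Synthetic panel: two regions with very different intercepts but a common slope of 2.
panel = [("east", 1, 12), ("east", 2, 14), ("east", 3, 16),
         ("west", 1, 52), ("west", 2, 54), ("west", 3, 56)]
```

Pooled OLS on these rows would be distorted by the 40-unit gap between the regions' intercepts; the within transformation recovers the common slope of 2 exactly.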


Accurate and detailed road models play an important role in a number of geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance systems. In this thesis, an integrated approach for the automatic extraction of precise road features from high resolution aerial images and LiDAR point clouds is presented. A framework of road information modeling has been proposed, for rural and urban scenarios respectively, and an integrated system has been developed to deal with road feature extraction using image and LiDAR analysis. For road extraction in rural regions, a hierarchical image analysis is first performed to maximize the exploitation of road characteristics at different resolutions. The rough locations and directions of roads are provided by the road centerlines detected in low resolution images, both of which can be further employed to facilitate the road information generation in high resolution images. The histogram thresholding method is then chosen to classify road details in high resolution images, where color space transformation is used for data preparation. After the road surface detection, anisotropic Gaussian and Gabor filters are employed to enhance road pavement markings while suppressing other ground objects, such as vegetation and houses. Afterwards, pavement markings are obtained from the filtered image using Otsu's clustering method. The final road model is generated by superimposing the lane markings on the road surfaces, where the digital terrain model (DTM) produced by LiDAR data can also be combined to obtain the 3D road model. As the extraction of roads in urban areas is greatly affected by buildings, shadows, vehicles, and parking lots, we combine high resolution aerial images and dense LiDAR data to fully exploit the precise spectral and horizontal spatial resolution of aerial images and the accurate vertical information provided by airborne LiDAR.
Object-oriented image analysis methods are employed to perform feature classification and road detection in aerial images. In this process, we first utilize an adaptive mean shift (MS) segmentation algorithm to segment the original images into meaningful object-oriented clusters. Then the support vector machine (SVM) algorithm is applied to the MS-segmented image to extract road objects. Road surface detected in LiDAR intensity images is taken as a mask to remove the effects of shadows and trees. In addition, the normalized DSM (nDSM) obtained from LiDAR is employed to filter out other above-ground objects, such as buildings and vehicles. The proposed road extraction approaches are tested using rural and urban datasets respectively. The rural road extraction method is performed using pan-sharpened aerial images of the Bruce Highway, Gympie, Queensland. The road extraction algorithm for urban regions is tested using the datasets of Bundaberg, which combine aerial imagery and LiDAR data. Quantitative evaluation of the extracted road information for both datasets has been carried out. The experiments and the evaluation results using the Gympie datasets show that more than 96% of the road surfaces and over 90% of the lane markings are accurately reconstructed, and the false alarm rates for road surfaces and lane markings are below 3% and 2% respectively. For the urban test sites of Bundaberg, more than 93% of the road surface is correctly reconstructed, and the mis-detection rate is below 10%.
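Of the individual steps above, Otsu's thresholding is simple enough to sketch in full: it scans candidate grey levels and keeps the threshold that maximises the between-class variance of the histogram. The toy pixel values below stand in for a filtered road image and are purely illustrative, not data from the thesis.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the grey-level threshold that maximises
    between-class variance of the histogram (class 0 = levels <= t)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                    # mean of the dark class
        m1 = (sum_all - sum0) / w1        # mean of the bright class
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "image": dark road surface (~30) and bright lane markings (~220).
pixels = [28, 30, 31, 29, 30, 32, 218, 220, 221, 219]
```

On this bimodal sample the chosen threshold falls between the two clusters, cleanly separating surface pixels from marking pixels.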


The deterioration of air quality is a significant issue in large and growing cities. This work investigates particulate emissions from transport, the largest source of air pollution in cities today. Emitters such as busy roads and diesel trains are investigated, with specific reference to the evolution of particles over time and distance. Diesel trains are examined as an alternative to road traffic for studying these evolutionary processes: higher emissions and solitary sources mean that the emitted plume can be observed over time in a single location. These results represent the first investigation of the evolution of fine and ultrafine aerosol particles from this type of source. Aerosols near a busy road are investigated, showing that the dependence of total number concentration on distance from the road is related to the fragmentation of nanoparticle clusters. Local meteorological conditions are also monitored, and humidity is shown to vary with distance from the road in a non-monotonic way. Particles from a busy road were also examined using a scanning electron microscope, with the intention of understanding the make-up of the emitted aerosol plume. It was determined that, due to significant surface behaviour post-deposition, this method of analysis could not directly classify airborne pollutants. Some interesting results were obtained, however, particularly in terms of composite particles and the analysis of deposited patterns. This thesis introduces new work in terms of the analysis of diesel train particulate emissions, as well as adding further evidence towards the fragmentation process of aerosol evolution in both background concentrations and emitted aerosol plumes.


Text classification techniques have advanced considerably in the past decade, driven by the increasing availability and widespread use of digital documents. Usually, the performance of text classification relies on the quality of the categories and the accuracy of classifiers learned from samples. When training samples are unavailable or categories are unqualified, text classification performance is degraded. In this paper, we propose an unsupervised multi-label text classification method to classify documents using a large set of categories stored in a world ontology. The approach has been evaluated, with promising results, against typical text classification methods on a real-world document collection, using ground truth encoded by human experts.
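The general mechanism — scoring each category's term profile against a document and keeping every sufficiently similar label — can be sketched as below. The categories, threshold and bag-of-words representation are toy assumptions for illustration; the paper itself draws its categories from a large world ontology.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(document, categories, threshold=0.2):
    """Unsupervised multi-label assignment: keep every category whose
    term profile is cosine-similar enough to the document's bag of words."""
    doc = Counter(document.lower().split())
    labels = []
    for name, terms in categories.items():
        profile = Counter(t.lower() for t in terms)
        if cosine(doc, profile) >= threshold:
            labels.append(name)
    return sorted(labels)

# Hypothetical category profiles standing in for ontology subject headings.
categories = {
    "Sport": ["match", "team", "league", "goal"],
    "Finance": ["market", "stock", "bank", "profit"],
    "Health": ["patient", "clinic", "treatment"],
}
```

Because every category is scored independently, a document can legitimately receive several labels, or none — the multi-label behaviour the paper requires.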


A total histological grade does not necessarily distinguish between different manifestations of cartilage damage or degeneration. An accurate and reliable histological assessment method is required to separate normal and pathological tissue within a joint during treatment of degenerative joint conditions, and to sub-classify the latter in meaningful ways. The Modified Mankin method may be adaptable for this purpose. We investigated how much detail may be lost by assigning one composite score/grade to represent different degenerative components of the osteoarthritic condition. We used four ovine injury models (sham surgery, anterior cruciate ligament/medial collateral ligament instability, simulated anatomic anterior cruciate ligament reconstruction and meniscal removal) to induce different degrees and potentially 'types' (mechanisms) of osteoarthritis. Articular cartilage was systematically harvested, prepared for histological examination and graded in a blinded fashion using a Modified Mankin grading method. Results showed that the possible permutations of cartilage damage were numerous and far more varied than current histological grading systems are designed to capture. Of 1352 cartilage specimens graded, 234 different manifestations of potential histological damage were observed across 23 potential individual grades of the Modified Mankin grading method. The results presented here show that current composite histological grading may contain additional information that could potentially discern different stages or mechanisms of cartilage damage and degeneration in a sheep model. This approach may be applicable to other grading systems.


KLK15 over-expression is reported to be a significant predictor of reduced progression-free survival and overall survival in ovarian cancer. Our aim was to analyse the KLK15 gene for putative functional single nucleotide polymorphisms (SNPs) and assess the association of these and KLK15 HapMap tag SNPs with ovarian cancer survival.

Results: In silico analysis was performed to identify KLK15 regulatory elements and to classify potentially functional SNPs in these regions. After SNP validation and identification by DNA sequencing of ovarian cancer cell lines and aggressive ovarian cancer patients, 9 SNPs were shortlisted and genotyped using the Sequenom iPLEX Mass Array platform in a cohort of Australian ovarian cancer patients (N = 319). In the Australian dataset we observed significantly worse survival for the KLK15 rs266851 SNP in a dominant model (Hazard Ratio (HR) 1.42, 95% CI 1.02-1.96). This association was observed in the same direction in two independent datasets, with a combined HR for the three studies of 1.16 (1.00-1.34). This SNP lies 15 bp downstream of a novel exon and is predicted to be involved in mRNA splicing. The mutant allele is also predicted to abrogate an HSF-2 binding site.

Conclusions: We provide evidence of association for the SNP rs266851 with ovarian cancer survival. Our results provide the impetus for downstream functional assays and additional independent validation studies to assess the role of KLK15 regulatory SNPs and KLK15 isoforms with alternative intracellular functional roles in ovarian cancer survival.
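For readers unfamiliar with the genetic terminology, a "dominant model" groups carriers of at least one minor allele against non-carriers. The sketch below codes genotypes that way and compares crude median survival between the two groups; the genotype strings and survival times are hypothetical, and unlike the Cox models used in the study, this toy comparison ignores censoring entirely.

```python
def dominant_coding(genotypes, minor_allele):
    """Dominant-model coding: 1 if a subject carries at least one copy
    of the minor allele, else 0."""
    return [1 if minor_allele in g else 0 for g in genotypes]

def median(xs):
    s = sorted(xs)
    n = len(s)
    return (s[n // 2 - 1] + s[n // 2]) / 2 if n % 2 == 0 else s[n // 2]

def carrier_vs_noncarrier(genotypes, minor_allele, survival_months):
    """Crude comparison of median survival: carriers vs non-carriers."""
    coded = dominant_coding(genotypes, minor_allele)
    carriers = [t for c, t in zip(coded, survival_months) if c == 1]
    others = [t for c, t in zip(coded, survival_months) if c == 0]
    return median(carriers), median(others)
```

A lower median among carriers would point in the same direction as an HR above 1, but only a proper survival model handles censored follow-up correctly.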


The discovery of protein variation is an important strategy in disease diagnosis within the biological sciences. The current benchmark for elucidating information from multiple biological variables is the so-called “omics” disciplines. Such variability is uncovered by the implementation of multivariable data mining techniques, which fall into two primary categories: machine learning strategies and statistically based approaches. Typically, proteomic studies can produce hundreds or thousands of variables, p, per observation, n, depending on the analytical platform or method employed to generate the data. Many classification methods are limited by an n≪p constraint and, as such, require pre-treatment to reduce the dimensionality prior to classification. Recently, machine learning techniques have gained popularity in the field for their ability to successfully classify unknown samples. One limitation of such methods is the lack of a functional model allowing meaningful interpretation of results in terms of the features used for classification. This is a problem that might be solved using a statistical model-based approach, in which not only is the importance of each individual protein made explicit, but proteins are combined into a readily interpretable classification rule without relying on a black-box approach. Here we incorporate the statistical dimension-reduction techniques Partial Least Squares (PLS) and Principal Components Analysis (PCA), followed by both statistical and machine learning classification methods, and compare them to a popular machine learning technique, Support Vector Machines (SVM). Both PLS and SVM demonstrate strong utility for proteomic classification problems.
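As an illustration of the dimension-reduction step, the sketch below computes the first principal component by power iteration on the sample covariance matrix, in plain Python. In a real n≪p proteomic setting one would use an optimised library routine; the data here are a hypothetical toy example.

```python
def pca_first_component(X, iters=200):
    """First principal component via power iteration on the covariance
    matrix. X is a list of samples (rows); returns the unit direction
    and each sample's score (projection) on it."""
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    Xc = [[row[j] - means[j] for j in range(p)] for row in X]
    # sample covariance matrix (p x p)
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
          for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    scores = [sum(row[j] * v[j] for j in range(p)) for row in Xc]
    return v, scores
```

Feeding the scores (a handful of components rather than thousands of raw features) into a downstream classifier is one standard way around the n≪p constraint discussed above.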


Traversability maps are a global spatial representation of the relative difficulty of driving through a local region. These maps support simple optimisation of robot paths and have been very popular in path-planning techniques. Despite their popularity, the methods for generating global traversability maps have been limited to using a priori information. This paper explores the construction of large-scale traversability maps for a vehicle performing a repeated activity in a bounded working environment, such as a repeated delivery task. We evaluate the use of vehicle power consumption, longitudinal slip, lateral slip and vehicle orientation to classify traversability, and incorporate this into a map generated from sparse information.
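A minimal sketch of turning the measured signals into a traversability class for one map cell might look as follows. Every threshold here is an illustrative assumption, not a value from the paper; a fielded system would calibrate them per vehicle and terrain.

```python
def traversability(power_w, long_slip, lat_slip, roll_deg,
                   max_power=900.0, max_slip=0.35, max_roll=15.0):
    """Classify one map cell from drive-over measurements: power draw (W),
    longitudinal/lateral wheel slip (0-1), and vehicle roll (degrees).
    Thresholds are illustrative placeholders."""
    if (power_w > max_power or long_slip > max_slip
            or lat_slip > max_slip or abs(roll_deg) > max_roll):
        return "untraversable"
    # normalised cost: how close the worst signal is to its limit
    cost = max(power_w / max_power, long_slip / max_slip,
               lat_slip / max_slip, abs(roll_deg) / max_roll)
    return "difficult" if cost > 0.6 else "easy"
```

Because the vehicle repeats the same routes, cells it has never driven over can be left unknown and filled in gradually, which is what makes a map from sparse information workable.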


This paper develops a framework for classifying term dependencies in query expansion with respect to the role terms play in structural linguistic associations. The framework is used to classify and compare the query expansion terms produced by the unigram and positional relevance models. As the unigram relevance model does not explicitly model term dependencies in its estimation process it is often thought to ignore dependencies that exist between words in natural language. The framework presented in this paper is underpinned by two types of linguistic association, namely syntagmatic and paradigmatic associations. It was found that syntagmatic associations were a more prevalent form of linguistic association used in query expansion. Paradoxically, it was the unigram model that exhibited this association more than the positional relevance model. This surprising finding has two potential implications for information retrieval models: (1) if linguistic associations underpin query expansion, then a probabilistic term dependence assumption based on position is inadequate for capturing them; (2) the unigram relevance model captures more term dependency information than its underlying theoretical model suggests, so its normative position as a baseline that ignores term dependencies should perhaps be reviewed.
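The two association types can be made concrete with a toy diagnostic: syntagmatic pairs co-occur in sequence within a window ("hot" … "coffee"), while paradigmatic pairs substitute for each other in the same slot and therefore share surrounding contexts ("coffee" / "tea"). The corpus, window size and decision rule below are illustrative assumptions only, not the framework's actual classification procedure.

```python
from collections import Counter

def association_type(corpus, w1, w2, window=2):
    """Toy diagnostic: 'syntagmatic' if w1 and w2 co-occur within a token
    window; otherwise 'paradigmatic' if they share surrounding context
    words; otherwise 'none'."""
    ctx = {w1: Counter(), w2: Counter()}
    cooccur = 0
    for sentence in corpus:
        toks = sentence.split()
        for i, t in enumerate(toks):
            if t in ctx:
                lo, hi = max(0, i - window), i + window + 1
                for n in toks[lo:i] + toks[i + 1:hi]:
                    ctx[t][n] += 1
                    if {t, n} == {w1, w2}:
                        cooccur += 1
    if cooccur:
        return "syntagmatic"
    shared = (set(ctx[w1]) & set(ctx[w2])) - {w1, w2}
    return "paradigmatic" if shared else "none"

corpus = ["the hot coffee steamed", "the hot tea steamed", "strong coffee please"]
```

On this corpus, "hot"/"coffee" come out syntagmatic and "coffee"/"tea" paradigmatic, mirroring the distinction the framework draws for query expansion terms.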


A Flash Event (FE) represents a period of time when a web-server experiences a dramatic increase in incoming traffic, either following a newsworthy event that has prompted users to locate and access it, or as a result of redirection from other popular web or social media sites. This usually leads to network congestion and Quality-of-Service (QoS) degradation. These events can be mistaken for Distributed Denial-of-Service (DDoS) attacks aimed at disrupting the server. Accurate detection of FEs and their distinction from DDoS attacks is important, since different actions need to be undertaken by network administrators in these two cases. However, the lack of public-domain FE datasets hinders research in this area. In this paper we present a detailed study of flash events and classify them into three broad categories. In addition, the paper describes FEs in terms of three key components: the volume of incoming traffic, the related source IP-addresses, and the resources being accessed. We present such an FE model with minimal parameters and use publicly available datasets to analyse and validate our proposed model. The model can be used to generate different types of FE traffic, closely approximating real-world scenarios, in order to facilitate research into distinguishing FEs from DDoS attacks.
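The three-component description lends itself to a simple generator: request volume follows ramp-up, sustain and decay phases, while the pool of distinct source addresses grows with the traffic. All rates, phase lengths and the source-arrival rule below are illustrative assumptions, not parameters fitted in the paper.

```python
import random

def flash_event(baseline=50, peak=2000, ramp=10, sustain=20, decay=30, seed=7):
    """Generate per-interval request counts and the number of distinct
    sources for a synthetic flash event: linear ramp-up, sustained peak,
    exponential decay back towards the baseline."""
    random.seed(seed)
    volumes, sources = [], set()
    for t in range(ramp + sustain + decay):
        if t < ramp:                      # ramp-up phase
            rate = baseline + (peak - baseline) * (t + 1) / ramp
        elif t < ramp + sustain:          # sustained peak
            rate = peak
        else:                             # decay phase
            rate = baseline + (peak - baseline) * 0.8 ** (t - ramp - sustain)
        volumes.append(int(rate))
        # new sources arrive roughly in proportion to volume (toy assumption)
        for _ in range(int(rate) // 10):
            sources.add(random.randrange(100000))
    return volumes, len(sources)
```

The gradual ramp-up and the steadily widening set of legitimate sources are exactly the traits that help distinguish an FE trace from the abrupt, spoof-heavy profile of many DDoS attacks.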


The practitioner lawyer of the past had little need to reflect on process. The doctrinal research methodology developed intuitively within the common law — a research method at the core of practice. There was no need to justify or classify it within a broader research framework. Modern academic lawyers are facing a different situation. At a time when competition for limited research funds is becoming more intense, and in which interdisciplinary work is highly valued and non-lawyers are involved in the assessment of grant applications, lawyer-applicants who engage in doctrinal research need to be able to explain their methodology more clearly. Doctrinal scholars need to be more open and articulate about their methods. These methods may be different in different contexts. This paper examines the doctrinal method used in legal research and its place in recent research dialogue. Some commentators are of the view that the doctrinal method is simply scholarship rather than a separate research methodology. Richard Posner even suggests that law is ‘not a field with a distinct methodology, but an amalgam of applied logic, rhetoric, economics and familiarity with a specialized vocabulary and a particular body of texts, practices, and institutions ...’.1 Therefore, academic lawyers are beginning to realise that the doctrinal research methodology needs clarification for those outside the legal profession and that a discussion about the standing and place of doctrinal research compared to other methodologies is required.


Recent research has proposed Neo-Piagetian theory as a useful way of describing the cognitive development of novice programmers. Neo-Piagetian theory may also be a useful way to classify materials used in learning and assessment. If Neo-Piagetian coding of learning resources is to be useful, then it is important that practitioners can learn it and apply it reliably. We describe the design of an interactive web-based tutorial for Neo-Piagetian categorization of assessment tasks. We also report an evaluation of the tutorial's effectiveness, in which twenty computer science educators participated. The participants' average classification accuracy on each of the three Neo-Piagetian stages was 85%, 71% and 78% respectively. Participants also rated their agreement with the expert classifications, indicating high agreement (91%, 83% and 91% across the three stages). Self-rated confidence in applying Neo-Piagetian theory to classifying programming questions was 29% before the tutorial and 75% after it. Our key contribution is demonstrating the feasibility of the Neo-Piagetian approach to classifying assessment materials, by showing that it is learnable and can be applied reliably by a group of educators. Our tutorial is freely available as a community resource.