957 results for learning platform


Relevance: 30.00%

Abstract:

Agricultural pests are responsible for millions of dollars in crop losses and management costs every year. In order to implement optimal site-specific treatments and reduce control costs, new methods to accurately monitor and assess pest damage need to be investigated. In this paper we explore the combination of unmanned aerial vehicles (UAV), remote sensing and machine learning techniques as a promising technology to address this challenge. The deployment of UAVs as a sensor platform is a rapidly growing field of study for biosecurity and precision agriculture applications. In this experiment, a data collection campaign is performed over a sorghum crop severely damaged by white grubs (Coleoptera: Scarabaeidae). The larvae of these scarab beetles feed on the roots of plants, which in turn impairs root exploration of the soil profile. In the field, crop health status could be classified according to three levels: bare soil where plants were decimated, transition zones of reduced plant density and healthy canopy areas. In this study, we describe the UAV platform deployed to collect high-resolution RGB imagery as well as the image processing pipeline implemented to create an orthoimage. An unsupervised machine learning approach is formulated in order to create a meaningful partition of the image into each of the crop health levels. The aim of the approach is to simplify the image analysis step by minimizing user input requirements and avoiding the manual data labeling necessary in supervised learning approaches. The implemented algorithm is based on the K-means clustering algorithm. In order to control high-frequency components present in the feature space, a neighbourhood-oriented parameter is introduced by applying Gaussian convolution kernels prior to K-means. The outcome of this approach is a soft K-means algorithm similar to the EM algorithm for Gaussian mixture models. The results show that the algorithm delivers decision boundaries that consistently classify the field into three clusters, one for each crop health level. The methodology presented in this paper represents an avenue for further research towards automated crop damage assessments and biosecurity surveillance.
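The paper does not include source code; the following is a minimal sketch of the described idea, assuming the orthoimage is loaded as a NumPy array and using hard K-means on Gaussian-smoothed bands as a simplification of the soft K-means formulation. Function and parameter names are illustrative only.

    # Minimal sketch (not the authors' implementation): Gaussian smoothing of the
    # RGB bands followed by K-means clustering into three crop-health classes.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.cluster import KMeans

    def classify_crop_health(rgb_image, sigma=5.0, n_clusters=3):
        """rgb_image: H x W x 3 float array taken from the orthoimage (assumed input)."""
        # Neighbourhood-oriented smoothing: suppress high-frequency components
        # in each band before clustering.
        smoothed = np.stack(
            [gaussian_filter(rgb_image[..., b], sigma=sigma) for b in range(3)],
            axis=-1,
        )
        features = smoothed.reshape(-1, 3)
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
        # Each pixel falls into one of three clusters, later interpreted as bare
        # soil, transition zone or healthy canopy.
        return labels.reshape(rgb_image.shape[:2])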

Relevance: 30.00%

Abstract:

Predicting clinical response to anticancer drugs remains a major challenge in cancer treatment. Emerging reports indicate that the tumour microenvironment and heterogeneity can limit the predictive power of current biomarker-guided strategies for chemotherapy. Here we report the engineering of personalized tumour ecosystems that contextually conserve the tumour heterogeneity, and phenocopy the tumour microenvironment using tumour explants maintained in defined tumour grade-matched matrix support and autologous patient serum. The functional response of tumour ecosystems, engineered from 109 patients, to anticancer drugs, together with the corresponding clinical outcomes, is used to train a machine learning algorithm; the learned model is then applied to predict the clinical response in an independent validation group of 55 patients, where we achieve 100% sensitivity in predictions while keeping specificity in a desired high range. The tumour ecosystem and algorithm, together termed the CANScript technology, can emerge as a powerful platform for enabling personalized medicine.
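The CANScript algorithm itself is not described in this abstract; the sketch below only illustrates the general pattern of training a classifier on drug-response readouts and choosing the most conservative decision threshold that still yields 100% sensitivity on held-out patients. The model, features and helper names are assumptions, not the published method.

    # Illustrative sketch only (not the CANScript algorithm): train a classifier on
    # ex vivo drug-response features and pick the highest probability threshold that
    # still detects every true responder, then report the resulting specificity.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def fit_and_tune(X_train, y_train, X_val, y_val):
        model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
        scores = model.predict_proba(X_val)[:, 1]
        for t in np.sort(np.unique(scores))[::-1]:        # highest threshold first
            preds = scores >= t
            sensitivity = preds[y_val == 1].mean()
            if sensitivity == 1.0:                        # every responder detected
                specificity = (~preds)[y_val == 0].mean()
                return model, t, specificity
        return model, 0.0, 0.0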

Relevance: 30.00%

Abstract:

Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of bio-medical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides us with the underlying ground truth of the simulated images, because we dictate the simulated structures at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool is a thorough understanding of the signal formation process, a clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, all of which are explained in detail later in the thesis.
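The simulator itself relies on importance sampling, photon splitting, a voxel-based mesh and parallel A-scan computation; the toy sketch below only shows the basic photon random-walk loop in a homogeneous medium so the structure of such a simulation is visible. All optical parameters and the simplified angle update are made up for illustration.

    # Toy Monte Carlo photon random walk (not the thesis simulator): sample free
    # paths, attenuate the photon weight, and scatter via Henyey-Greenstein.
    import numpy as np

    rng = np.random.default_rng(0)
    mu_s, mu_a, g = 10.0, 0.1, 0.9            # scattering/absorption (1/mm), anisotropy

    def trace_photon(max_depth_mm=2.0):
        z, cos_z, weight, path = 0.0, 1.0, 1.0, 0.0
        while 0.0 <= z <= max_depth_mm and weight > 1e-4:
            step = -np.log(rng.random()) / (mu_s + mu_a)      # sample free path length
            z += step * cos_z
            path += step
            weight *= mu_s / (mu_s + mu_a)                    # absorption reduces weight
            # Henyey-Greenstein sample of the polar scattering angle, applied only
            # to the depth direction here; the full code rotates the direction in 3-D.
            u = rng.random()
            cos_theta = (1 + g**2 - ((1 - g**2) / (1 - g + 2 * g * u))**2) / (2 * g)
            cos_z *= cos_theta
        escaped = z < 0.0                     # photon re-emerged at the surface
        return escaped, weight, path          # back-scattered weight and optical path

    # Aggregating many photons yields one (noisy) depth profile, i.e. an A-scan.
    depths = [p / 2 for ok, w, p in (trace_photon() for _ in range(10000)) if ok]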

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure, which predicts the length of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
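The exact architecture and feature engineering are detailed later in the thesis; the sketch below, with illustrative model choices and naive pixel features, only conveys the two-stage idea: a classifier first determines the structure type of an image, and a structure-specific regressor then predicts the layer geometry.

    # Sketch of the "committee of experts" hierarchy: classify the structure type,
    # then dispatch to a regressor trained specifically for that structure.
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    class HierarchicalOCTInterpreter:
        def __init__(self):
            self.structure_clf = RandomForestClassifier(n_estimators=300, random_state=0)
            self.layer_regressors = {}                     # one regressor per structure type

        def fit(self, images, structure_labels, layer_targets):
            X = [img.ravel() for img in images]            # naive pixel features
            self.structure_clf.fit(X, structure_labels)
            for s in set(structure_labels):
                idx = [i for i, lab in enumerate(structure_labels) if lab == s]
                reg = RandomForestRegressor(n_estimators=300, random_state=0)
                reg.fit([X[i] for i in idx], [layer_targets[i] for i in idx])
                self.layer_regressors[s] = reg

        def predict(self, image):
            x = [image.ravel()]
            structure = self.structure_clf.predict(x)[0]   # e.g. number/type of layers
            layers = self.layer_regressors[structure].predict(x)[0]
            return structure, layers                       # reconstructed ground truth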

It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth) could previously hardly be seen but now becomes fully resolved. Interestingly, although the OCT signals that make up the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that, when fed into a well-trained machine learning model, yields precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a successful attempt but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting this kind of task requires a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.

Relevance: 30.00%

Abstract:

Numerous observations in clinical and preclinical studies indicate that the developing brain is particularly sensitive to the pernicious effects of lead (Pb). However, the effect of gestation-only Pb exposure on cognitive functions at maturation has not been studied. We investigated the potential effects of three levels of Pb exposure (low, middle, and high Pb: 0.03%, 0.09%, and 0.27% lead acetate-containing diets) during the gestational period on the spatial memory of young adult offspring using the Morris water maze spatial learning and fixed location/visible platform tasks. Our results revealed that all three levels of Pb exposure significantly impaired memory retrieval in male offspring, whereas only female offspring at low levels of Pb exposure showed impairment of memory retrieval. These impairments were not due to gross disturbances in motor performance or vision, because these animals performed the fixed location/visible platform task as well as controls, indicating that specific aspects of spatial learning/memory were impaired. These results suggest that exposure to Pb during the gestational period is sufficient to cause long-term learning/memory deficits in young adult offspring. (C) 2003 Elsevier Inc. All rights reserved.

Relevance: 30.00%

Abstract:

Currently, no available pathological or molecular measures of tumor angiogenesis predict response to antiangiogenic therapies used in clinical practice. Recognizing that tumor endothelial cells (EC) and EC activation and survival signaling are the direct targets of these therapies, we sought to develop an automated platform for quantifying activity of critical signaling pathways and other biological events in EC of patient tumors by histopathology. Computer image analysis of EC in highly heterogeneous human tumors by a statistical classifier trained using examples selected by human experts performed poorly due to subjectivity and selection bias. We hypothesized that the analysis can be optimized by a more active process to aid experts in identifying informative training examples. To test this hypothesis, we incorporated a novel active learning (AL) algorithm into FARSIGHT image analysis software that aids the expert by seeking out informative examples for the operator to label. The resulting FARSIGHT-AL system identified EC with specificity and sensitivity consistently greater than 0.9 and outperformed traditional supervised classification algorithms. The system modeled individual operator preferences and generated reproducible results. Using the results of EC classification, we also quantified proliferation (Ki67) and activity in important signal transduction pathways (MAP kinase, STAT3) in immunostained human clear cell renal cell carcinoma and other tumors. FARSIGHT-AL enables characterization of EC in conventionally preserved human tumors in a more automated process suitable for testing and validating in clinical trials. The results of our study support a unique opportunity for quantifying angiogenesis in a manner that can now be tested for its ability to identify novel predictive and response biomarkers.
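FARSIGHT-AL's own query strategy is described in the paper; the loop below is only a generic uncertainty-sampling sketch of the active-learning idea, in which the model repeatedly asks the operator to label the cells it is least certain about. The ask_expert callback, model choice and batch sizes are assumptions for illustration.

    # Generic uncertainty-sampling loop (not the FARSIGHT-AL implementation).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def active_learning_loop(X_pool, ask_expert, n_rounds=20, batch=5, seed_size=10):
        rng = np.random.default_rng(0)
        labeled = list(rng.choice(len(X_pool), size=seed_size, replace=False))
        y = {i: ask_expert(i) for i in labeled}            # initial expert labels (0/1)
        model = LogisticRegression(max_iter=1000)
        for _ in range(n_rounds):
            model.fit(X_pool[labeled], [y[i] for i in labeled])
            proba = model.predict_proba(X_pool)[:, 1]
            uncertainty = -np.abs(proba - 0.5)             # closest to 0.5 = most informative
            candidates = [i for i in np.argsort(uncertainty)[::-1] if i not in y]
            for i in candidates[:batch]:                   # query the operator on these
                y[i] = ask_expert(i)
                labeled.append(i)
        return model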

Relevance: 30.00%

Abstract:

This paper tells the story of how a set of university lectures has developed over the last six years. The idea is to show how (1) content, (2) communication and (3) assessment have evolved in steps which are named “generations of web learning”. The reader is offered a stepwise description of both the didactic foundations of the university lectures and their practical implementation on a widely available web platform. The relative weight of directive elements has gradually decreased through the “three generations”, whereas characteristics of self-responsibility and self-guided learning have gained in importance. Content was in early phases presented and expected to be learned, but in later phases expected to be constructed, for example in case studies. Communication in early phases meant delivering assignments to the lecturer, but later on meant forming teams, exchanging standpoints and reviewing each other's work. Assessment initially consisted of marks devised and added up by the lecturer, but was later enriched by peer review, mutual grading and voting procedures. How much “added value” can the web provide for teaching, training and learning? Six years of experience suggest: mainly insofar as new (collaborative and self-directed) didactic scenarios are implemented! (DIPF/Orig.)

Relevance: 30.00%

Abstract:

This paper is concerned with several of the most important aspects of Competence-Based Learning (CBL): course authoring, assignments, and the categorization of learning content. The latter is part of the so-called Bologna Process (BP) and can be supported effectively by integrating knowledge resources such as standardized skill and competence taxonomies into the target implementation, making effective use of an open integration architecture while fostering the interoperability of hybrid knowledge-based e-learning solutions. Modern scenarios call for interoperable software solutions that seamlessly integrate existing e-learning infrastructures and legacy tools with innovative technologies while remaining cognitively efficient to handle, so that prospective users can work with them without learning overhead. At the same time, methods of Learning Design (LD) in combination with CBL are becoming increasingly important for the production and maintenance of solutions that are easy to facilitate. We present our approach to developing competence-based course-authoring and assignment-support software. It bridges the gap between contemporary Learning Management Systems (LMS) and established legacy learning infrastructures by embedding existing resources via Learning Tools Interoperability (LTI). Furthermore, the underlying conceptual architecture for this integration approach will be explained. In addition, a competence management structure based on knowledge technologies supporting standardized skill and competence taxonomies will be introduced. The overall goal is to develop a software solution which will not only merge flawlessly into a legacy platform and several other learning environments, but also remain intuitively usable. As a proof of concept, the platform-independent conceptual architecture model will be validated by a concrete use case scenario.
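The paper describes an architecture rather than code; the fragment below is a minimal sketch, with hypothetical names, of how course units embedded via LTI might be tagged with entries of a standardized skill and competence taxonomy so that learning content can be categorized and queried.

    # Minimal sketch (hypothetical names, not from the paper): tagging LTI-embedded
    # course units with standardized competence-taxonomy entries.
    from dataclasses import dataclass, field

    @dataclass
    class Competence:
        taxonomy_id: str            # identifier within a standardized taxonomy
        label: str
        level: int                  # proficiency level the unit is meant to reach

    @dataclass
    class CourseUnit:
        title: str
        lti_launch_url: str         # external resource embedded via LTI
        competences: list = field(default_factory=list)

    def units_covering(units, taxonomy_id):
        """Categorization query: all units that address a given competence."""
        return [u for u in units if any(c.taxonomy_id == taxonomy_id for c in u.competences)]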

Relevance: 30.00%

Abstract:

In this article the authors explore and evaluate developments in the use of information and communications technologies (ICT) within social work education at Queen's University Belfast since the inception of the new degree in social work. They look at the staff development strategy utilised to increase teacher confidence and competence in the use of the Queen's Online virtual learning environment tools, as well as the student experience of participation in modules involving online discussions. The authors conclude that the project provided a further opportunity to reflect on how ICT can be used as a platform to support a whole course in a systematic and coordinated way, and to ensure all staff remained abreast of ongoing developments in the use of ICT to support learning, which is a normative expectation of students entering universities. A very satisfying outcome for the leaders is our observation of the emergence of other 'experts' in different aspects of the use of ICT amongst the staff team. This project also shows that taking a team rather than an individual approach can be particularly beneficial.

Relevance: 30.00%

Abstract:

Traditional business models in the aerospace industry are built on a conventional supplier-to-customer relationship centred on the design, manufacture and subsequent delivery of the physical product. Service provision, from the manufacturer's perspective, is typically limited to the supply of procedural documentation and the provision of spare parts to the end user as the product passes through the latter stages of its intended lifecycle. Challenging economic and political conditions have resulted in end users restructuring their core business activities, particularly in the defence sector. This has resulted in the need for original equipment manufacturers (OEMs) to integrate and manage support service activities in partnership with the customer to deliver platform availability. This improves the probability of commercial sustainability for the OEM through shared operational risks while reducing the cost of platform ownership for the customer. The need for OEMs to evolve their design, manufacture and supply strategies by focusing on customer requirements has revealed a need for the reconstruction of traditional internal behaviours and design methodologies. Application of organisational learning is now a well-recognised principle by which innovative companies achieve long-term growth and sustained technical development, and hence greater market command. It focuses on the process by which the organisation's knowledge and value base changes, leading to improved problem-solving ability and capacity for action. From the perspective of availability contracting, knowledge, and the processes by which it is generated, used and retained, become primary assets within the learning organisation. This paper will introduce the application of digital methods to asset management by demonstrating how the process of learning can benefit from a digital approach, how product and process design can be integrated within a virtual framework, and finally how the approach can be applied in a service context.

Relevance: 30.00%

Abstract:

Mobile malware has continued to grow at an alarming rate despite ongoing mitigation efforts. This has been much more prevalent on Android because it is an open platform that is rapidly overtaking competing platforms in the mobile smart devices market. Recently, a new generation of Android malware families has emerged with advanced evasion capabilities which make them much more difficult to detect using conventional methods. This paper proposes and investigates a parallel machine learning based classification approach for the early detection of Android malware. Using real malware samples and benign applications, a composite classification model is developed from a parallel combination of heterogeneous classifiers. The empirical evaluation of the model under different combination schemes demonstrates its efficacy and potential to improve detection accuracy. More importantly, by utilizing several classifiers with diverse characteristics, their strengths can be harnessed not only for enhanced Android malware detection but also for quicker white-box analysis by means of the more interpretable constituent classifiers.
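The paper evaluates several combination schemes; the sketch below shows just one plausible instance, a majority-vote combination of heterogeneous classifiers fitted in parallel, and assumes that feature extraction from the apps has already produced a feature matrix X and labels y.

    # Illustrative parallel combination of heterogeneous classifiers (majority vote).
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import BernoulliNB
    from sklearn.tree import DecisionTreeClassifier

    def build_composite_classifier(n_jobs=-1):
        members = [
            ("tree", DecisionTreeClassifier()),
            ("forest", RandomForestClassifier(n_estimators=100)),
            ("nb", BernoulliNB()),
            ("logreg", LogisticRegression(max_iter=1000)),
        ]
        # n_jobs=-1 fits the constituent classifiers in parallel; 'hard' voting
        # merges their individual malware/benign decisions by majority.
        return VotingClassifier(estimators=members, voting="hard", n_jobs=n_jobs)

    # composite = build_composite_classifier().fit(X_train, y_train)
    # predictions = composite.predict(X_test)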

Relevance: 30.00%

Abstract:

This paper investigates camera control for capturing bottle-cap target images in the fault-detection system of an industrial production line. The main purpose is to identify the targeted bottle caps accurately in real time from the images. This is achieved by combining iterative learning control and Kalman filtering to reduce the effect of the various disturbances introduced into the detection system. A mathematical model, together with a physical simulation platform, is established based on the actual production requirements, and the convergence properties of the model are analyzed. It is shown that the proposed method enables accurate real-time control of the camera, and furthermore, the gain range of the learning rule is obtained. The numerical simulation and experimental results confirm that the proposed method can reduce the effect not only of repeatable disturbances but also of non-repeatable ones.
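The paper derives its own model and convergence conditions; the toy sketch below only illustrates how a Kalman filter can smooth the measured tracking error of each trial before a P-type iterative learning control update is applied to the next trial's control input. The scalar filter and the learning gain are illustrative values.

    # Toy sketch: Kalman-filtered tracking error feeding a P-type ILC update.
    import numpy as np

    def kalman_smooth(measured_error, q=1e-4, r=1e-2):
        x, p, out = 0.0, 1.0, []
        for z in measured_error:              # scalar predict/update per sample
            p += q                            # predict: process noise inflates variance
            k = p / (p + r)                   # Kalman gain
            x += k * (z - x)                  # update state estimate
            p *= (1 - k)                      # update variance
            out.append(x)
        return np.array(out)

    def ilc_update(u, reference, measured_output, learning_gain=0.6):
        error = reference - measured_output   # raw tracking error of this trial
        filtered = kalman_smooth(error)       # suppress non-repeatable disturbances
        return u + learning_gain * filtered   # control input for the next trial

    # Across repeated trials: u_{k+1} = u_k + L * filter(e_k); convergence depends on
    # the learning gain satisfying a contraction condition for the camera dynamics.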

Relevance: 30.00%

Abstract:

With over 50 billion downloads and more than 1.3 million apps in Google’s official market, Android has continued to gain popularity amongst smartphone users worldwide. At the same time there has been a rise in malware targeting the platform, with more recent strains employing highly sophisticated detection-avoidance techniques. As traditional signature-based methods become less potent in detecting unknown malware, alternatives are needed for timely zero-day discovery. This paper therefore proposes an approach that utilizes ensemble learning for Android malware detection. It combines the advantages of static analysis with the efficiency and performance of ensemble machine learning to improve Android malware detection accuracy. The machine learning models are built using a large repository of malware samples and benign apps from a leading antivirus vendor. The experimental results and analysis presented show that the proposed method, which uses a large feature space to leverage the power of ensemble learning, is capable of 97.3% to 99% detection accuracy with very low false positive rates.
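The paper's exact feature set and ensemble configuration are not given in the abstract; the sketch below, with illustrative feature names and a stacking configuration, shows one way binary static features extracted from APKs (e.g., requested permissions and API calls) could feed an ensemble learner.

    # Illustrative stacking ensemble over binary static features from APKs.
    from sklearn.ensemble import ExtraTreesClassifier, StackingClassifier
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import LinearSVC

    def vectorize_static_features(apk_feature_dicts):
        """apk_feature_dicts: e.g. [{'perm:SEND_SMS': 1, 'api:getDeviceId': 1}, ...]"""
        vec = DictVectorizer(sparse=True)
        return vec, vec.fit_transform(apk_feature_dicts)

    def build_ensemble():
        base = [
            ("trees", ExtraTreesClassifier(n_estimators=200)),
            ("svm", LinearSVC(C=1.0)),
        ]
        # A logistic-regression meta-learner combines the base models' outputs.
        return StackingClassifier(estimators=base,
                                  final_estimator=LogisticRegression(max_iter=1000))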

Relevance: 30.00%

Abstract:

Personal response systems using hardware such as 'clickers' have been around for some time; however, their use is often restricted to multiple-choice questions (MCQs), and they are therefore used as a summative assessment tool for the individual student. More recent innovations such as 'Socrative' have removed the need for specialist hardware, instead utilising web-based technology and devices common to students, such as smartphones, tablets and laptops. While improving the potential for use in larger classrooms, this also creates the opportunity to pose more engaging open-response questions to students, who can 'text in' their thoughts on questions posed in class. This poster will present two applications of the Socrative system in an undergraduate psychology curriculum which aimed to encourage interactive engagement with course content using real-time student responses and lecturer feedback. Data are currently being collected and results will be presented at the conference.
The first application used Socrative to pose MCQs at the end of two modules (a level one Statistics module and a level two Individual Differences Psychology module, class size N≈100), with the intention of helping students assess their knowledge of the course. They were asked to rate their self-perceived knowledge of the course on a five-point Likert scale before and after completing the MCQs, as well as to give their views on the value of the revision session and any issues they had with using the app. The online MCQs remained open between the lecture and the exam, allowing students to revisit the questions at any time during their revision.
This poster will present data regarding the usefulness of the revision MCQs, the metacognitive effect of the MCQs on students' judgements of learning (pre vs post MCQ testing), as well as student engagement with the MCQs between the revision session and the examination. Student opinions on the use of the Socrative system in class will also be discussed.
The second application used Socrative to facilitate a flipped classroom lecture on a level two 'Conceptual Issues in Psychology' module (class size N≈100). The content of this module requires students to think critically about historical and contemporary conceptual issues in psychology and the philosophy of science. Students traditionally struggle with this module due to the emphasis on critical thinking skills rather than simply the retention of concrete knowledge. To prepare students for the written examination, a flipped classroom lecture was held at the end of the semester. Students were asked to revise their knowledge of a particular area of psychology through assigned reading, and were told that the flipped lecture would involve them thinking critically about the conceptual issues found in this area. They were informed that questions would be posed by the lecturer in class, and that they would be asked to post their thoughts using the Socrative app for a class discussion. The level of preparation students engaged in for the flipped lecture was measured, as well as qualitative opinions on the usefulness of the session. This poster will discuss the level of student engagement with the flipped lecture, both in terms of preparation for the lecture and engagement with questions posed during the lecture, as well as the lecturer's experience of facilitating the flipped classroom using the Socrative platform.

Relevance: 30.00%

Abstract:

Doctoral thesis, Education (ICT in Education), Universidade de Lisboa, Instituto de Educação, 2015