87 results for Automated Cryptanalysis
Abstract:
The discovery and clinical application of molecular biomarkers in solid tumors increasingly relies on nucleic acid extraction from FFPE tissue sections and subsequent molecular profiling. This in turn requires pathological review of haematoxylin & eosin (H&E) stained slides to ensure sample quality and tumor DNA sufficiency, by visually estimating the percentage of tumor nuclei, and tumor annotation for manual macrodissection. In this study on NSCLC, we demonstrate considerable variation in tumor nuclei percentage estimates between pathologists, potentially undermining the precision of NSCLC molecular evaluation and emphasising the need for quantitative tumor evaluation. We subsequently describe the development and validation of TissueMark, a system for automated tumor annotation and percentage tumor nuclei measurement in NSCLC using computerized image analysis. Evaluation of 245 NSCLC slides showed precise automated tumor annotation by TissueMark, strong concordance with manually drawn boundaries, and identical EGFR mutational status following manual macrodissection from the image-analysis-generated tumor boundaries. Automated cell counts for percentage tumor measurement by TissueMark showed reduced variability and significant correlation (p < 0.001) with benchmark tumor cell counts. This study demonstrates a robust image analysis technology that can facilitate automated quantitative analysis of tissue samples for molecular profiling in discovery and diagnostics.
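As a hedged illustration of the measurement itself (not TissueMark's implementation, which is not described here), the percentage tumor nuclei reduces to a simple ratio over per-nucleus classifications produced by an image analysis pipeline:

```python
# Minimal sketch: percentage tumor nuclei from per-nucleus class labels.
# The label names and upstream classifier are assumptions for illustration.

def percent_tumor_nuclei(labels):
    """labels: iterable of per-nucleus class strings, e.g. 'tumor', 'stroma'."""
    labels = list(labels)
    if not labels:
        raise ValueError("no nuclei detected")
    tumor = sum(1 for lab in labels if lab == "tumor")
    return 100.0 * tumor / len(labels)

# Example: 620 tumor nuclei among 1,000 detected nuclei -> 62.0
print(percent_tumor_nuclei(["tumor"] * 620 + ["stroma"] * 380))
```

This is the quantity that pathologists otherwise estimate visually, which is where the inter-observer variation reported above arises.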
Abstract:
We have designed software that can 'look' at recorded ultrasound sequences. We analyzed fifteen video sequences representing recorded ultrasound scans of nine fetuses. Our method requires a small number of user-labelled pixels for processing the first frame. These initialize GrowCut [1], a background removal algorithm, which was used to separate the fetus from its surrounding environment (segmentation). For each subsequent frame, user input is no longer necessary, as some of the pixels inherit labels from the previously processed frame. This gives our software the ability to track movement. Two sonographers rated the results of our computer's 'vision' on a scale from 1 (poor fit) to 10 (excellent fit). They assessed tracking accuracy for the entire video as well as segmentation accuracy (the ability to distinguish fetus from non-fetus) for every 100th processed frame. There was no appreciable deterioration in the software's ability to track the fetus over time.
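To make the segmentation step concrete, below is a minimal GrowCut-style cellular automaton in the spirit of the cited algorithm. It is a sketch under simplifying assumptions (grayscale frames, 4-connectivity), not the authors' implementation; frame-to-frame tracking would reuse the converged labels of one frame as the seeds for the next.

```python
import numpy as np

def growcut(image, seeds, iterations=200):
    """image: 2-D float array scaled to [0, 1]; seeds: int array where
    0 = unlabelled, 1 = foreground (fetus), 2 = background."""
    labels = seeds.copy()
    strength = (seeds > 0).astype(float)   # seed pixels start fully confident
    h, w = image.shape
    for _ in range(iterations):
        changed = False
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            # Shifted copies implement the "neighbour attacks cell" rule.
            ny = np.clip(np.arange(h) + dy, 0, h - 1)
            nx = np.clip(np.arange(w) + dx, 0, w - 1)
            n_img = image[ny][:, nx]
            n_lab = labels[ny][:, nx]
            n_str = strength[ny][:, nx]
            attack = (1.0 - np.abs(image - n_img)) * n_str  # similar pixels attack harder
            win = (attack > strength) & (n_lab > 0)
            if win.any():
                labels[win] = n_lab[win]
                strength[win] = attack[win]
                changed = True
        if not changed:
            break
    return labels

# Toy frame: two seed pixels are enough to segment a bright blob.
img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
seeds = np.zeros((64, 64), dtype=int); seeds[30, 30] = 1; seeds[5, 5] = 2
mask = growcut(img, seeds) == 1
```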
Abstract:
This paper presents the application of a novel methodology for quantifying saltwater intrusion parameters in laboratory-scale experiments. The methodology uses an automated image analysis procedure, minimizing manual inputs and the systematic errors they can introduce. This allowed quantification of the width of the mixing zone, which is difficult to measure in experimental methods based on visual observations. Glass beads of different grain sizes were tested under both steady-state and transient conditions. The transient results showed good correlation between experimental and numerical intrusion rates. The experimental intrusion rates revealed that the saltwater wedge reached a steady-state condition sooner while receding than while advancing. The hydrodynamics of the experimental mixing zone exhibited similar traits: a greater increase in the width of the mixing zone was observed in the receding saltwater wedge, indicating faster fluid velocities and higher dispersion. Analysis of the angle of intrusion revealed the formation of a volume of diluted saltwater at the toe position when the saltwater wedge is prompted to recede. In addition, different physical repeats of the experiment produced an average coefficient of variation of less than 0.18 for the measured toe length and width of the mixing zone.
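Although the paper's image analysis procedure is not reproduced here, the core measurement can be sketched: given a calibrated concentration field (dye intensity mapped to relative saltwater concentration), the mixing-zone width along a row is the distance between two iso-concentration crossings, and repeatability is summarized by a coefficient of variation. The 25%/75% thresholds below are illustrative assumptions.

```python
import numpy as np

def mixing_zone_width(conc_row, x, lo=0.25, hi=0.75):
    """Width of the mixing zone along one horizontal row of a calibrated
    concentration field (1.0 = saltwater, 0.0 = freshwater), taken as the
    distance between the hi and lo iso-concentration crossings."""
    order = np.argsort(conc_row)           # np.interp needs ascending samples
    x_hi = np.interp(hi, conc_row[order], x[order])
    x_lo = np.interp(lo, conc_row[order], x[order])
    return abs(x_lo - x_hi)

# Toy transition centred at x = 0.3 m.
x = np.linspace(0.0, 1.0, 500)
conc = 1.0 / (1.0 + np.exp((x - 0.3) / 0.02))
print(f"width = {mixing_zone_width(conc, x):.3f} m")

# Repeatability across physical repeats, as a coefficient of variation:
widths = np.array([0.044, 0.048, 0.041])   # illustrative repeat measurements
print(f"CV = {np.std(widths, ddof=1) / np.mean(widths):.2f}")
```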
Abstract:
Tri-axial accelerometer data loggers are increasingly popular for quantifying animal activity through the analysis of signature traces. However, there is no consensus on how to process the large data sets these devices generate when recording at the necessary high sample rates. In addition, there have been few attempts to validate accelerometer traces against specific behaviours in non-domesticated terrestrial mammals.
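The abstract does not specify a processing pipeline, but a common way to reduce high-sample-rate tri-axial traces to an activity measure is to strip the static (gravitational) component with a running mean and summarize the residual as ODBA or VeDBA; the 2 s window below is an assumption, not the authors' choice.

```python
import numpy as np

def dynamic_body_acceleration(ax, ay, az, fs, window_s=2.0):
    """ODBA/VeDBA summary of tri-axial accelerometer traces (a common
    technique, not this paper's specific method). fs: sample rate in Hz."""
    n = max(1, int(fs * window_s))
    kernel = np.ones(n) / n
    # Running mean per axis approximates the static (gravity) component.
    dyn = [a - np.convolve(a, kernel, mode="same") for a in (ax, ay, az)]
    odba = sum(np.abs(d) for d in dyn)
    vedba = np.sqrt(sum(d ** 2 for d in dyn))
    return odba, vedba

fs = 40.0                                    # e.g. a 40 Hz logger
t = np.arange(0.0, 10.0, 1.0 / fs)
ax = 0.05 * np.sin(2 * np.pi * 3 * t)        # dynamic motion
ay = np.zeros_like(t)
az = 1.0 + 0.05 * np.sin(2 * np.pi * 3 * t)  # gravity plus motion
odba, vedba = dynamic_body_acceleration(ax, ay, az, fs)
```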
Abstract:
This paper presents an automated design framework for the development of individual part-forming tools for a composite stiffener. The framework uses parametrically developed design geometries for both the part and its layup tool. It has been developed with a functioning user interface, where part/tool combinations are passed to a virtual environment for utility-based assessment of their features and assemblability characteristics. The work demonstrates clear benefits in process design methods, with conventional design timelines reduced from hours and days to minutes and seconds. The methods developed here were able to produce a digital mock-up of a component with its associated layup tool in less than 3 minutes. The virtual environment presenting the design to the designer for interactive assembly planning was generated in 20 seconds. Challenges remain in determining the level of realism required to provide an effective learning environment in the virtual world; full representation of physical phenomena such as gravity, part clashes, and standard build functions requires further work.
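A hedged sketch of the parametric idea follows: the same driving dimensions generate both the part and a matching layup-tool outline, which is why a part/tool combination can be regenerated in minutes. The dimensions, clearance, and run-off values are illustrative assumptions, and a real system would emit CAD features rather than a dictionary.

```python
from dataclasses import dataclass

@dataclass
class StiffenerParams:
    # Illustrative driving dimensions (mm) for a simple blade stiffener.
    length: float = 1200.0
    web_height: float = 40.0
    flange_width: float = 25.0
    thickness: float = 3.0

def tool_outline(p: StiffenerParams, clearance: float = 0.5, runoff: float = 50.0):
    """Derive a matching layup-tool cross-section from the part parameters."""
    return {
        "cavity_depth": p.web_height + clearance,
        "cavity_width": p.flange_width + 2.0 * clearance,
        "tool_length": p.length + 2.0 * runoff,
    }

print(tool_outline(StiffenerParams()))
```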
Abstract:
Morphological changes in the retinal vascular network are associated with future risk of many systemic and vascular diseases. However, uncertainty over the presence and nature of some of these associations exists. Analysis of data from large population-based studies will help to resolve these uncertainties. The QUARTZ (QUantitative Analysis of Retinal vessel Topology and siZe) retinal image analysis system allows automated processing of large numbers of retinal images. However, an image quality assessment module is needed to achieve full automation. In this paper, we propose such an algorithm, which uses the segmented vessel map to determine the suitability of retinal images for use in the creation of vessel morphometric data suitable for epidemiological studies. It comprises an effective 3-dimensional feature set and support vector machine classification. A random subset of 800 retinal images from UK Biobank (a large prospective study of 500,000 middle-aged adults, of whom 68,151 underwent retinal imaging) was used to examine the performance of the image quality algorithm. The algorithm achieved a sensitivity of 95.33% and a specificity of 91.13% for the detection of inadequate images. The strong performance of this image quality algorithm will make rapid automated analysis of vascular morphometry feasible on the entire UK Biobank dataset (and other large retinal datasets), with minimal operator involvement, and at low cost.
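The classification step can be sketched as follows, assuming three illustrative features computed from the segmented vessel map (the paper's actual 3-dimensional feature set is not reproduced here) and a standard SVM:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def vessel_features(vessel_mask):
    """Three illustrative quality features from a boolean vessel map."""
    area_fraction = vessel_mask.mean()                # how much vessel was found
    transitions = np.abs(np.diff(vessel_mask.astype(int), axis=1)).sum()
    fragmentation = transitions / max(int(vessel_mask.sum()), 1)
    column_coverage = vessel_mask.any(axis=0).mean()  # spatial spread
    return [area_fraction, fragmentation, column_coverage]

# Demo with synthetic masks and stand-in labels (1 = adequate, 0 = inadequate).
rng = np.random.default_rng(0)
masks = [rng.random((64, 64)) < rng.uniform(0.02, 0.2) for _ in range(40)]
X = np.array([vessel_features(m) for m in masks])
y = (X[:, 0] > 0.1).astype(int)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(clf.predict(X[:3]))
```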
Abstract:
The popularity of Computing degrees in the UK has been increasing significantly over recent years. In Northern Ireland, from 2007 to 2015, there has been a 40% increase in acceptances to Computer Science degrees, with England seeing a 60% increase over the same period (UCAS, 2016). However, this growth is tempered by the fact that Computer Science degrees also continue to have the highest dropout rates.
At Queen's University Belfast we currently have a Level 1 intake of over 400 students across a number of computing pathways. Our drive as staff is to empower and motivate the students to fully engage with the course content. All students take a Java programming module, the aim of which is to provide an understanding of the basic principles of object-oriented design. To assess these skills, we have developed Jigsaw Java, an innovative assessment tool offering intelligent, semi-supervised automated marking of code.
Jigsaw Java allows students to answer programming questions using a drag-and-drop interface to place code fragments into position. Their answer is compared to the sample solution and, if it matches, marks are allocated accordingly. However, if a match is not found, the corresponding code is executed using sample data to determine whether its logic is acceptable. If it is, the solution is flagged for checking by staff and, if satisfactory, is saved as an alternative solution. This means that appropriate marks can be allocated, and should another student submit the same placement of code fragments, it does not need to be executed or checked again; the system now knows how to assess it.
Jigsaw Java is also able to award partial marks depending on code placement and will "learn" over time (the marking flow is sketched below). Given the number of students, Jigsaw Java will improve the consistency and timeliness of marking.
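A minimal sketch of that marking flow (function names are illustrative, not Jigsaw Java's API): match known solutions first, otherwise execute against sample data, otherwise fall back to placement-based partial marks; staff-approved answers are cached so identical submissions never need re-checking.

```python
# Hedged sketch of semi-supervised automated marking, as described above.
known_solutions = {("read_input", "loop_sum", "print_total")}  # fragment orderings
pending_review = []

def runs_correctly_on_sample_data(placement):
    # Stand-in for assembling the fragments and executing them on test inputs.
    return placement[0] == "read_input"

def mark(placement):
    placement = tuple(placement)
    if placement in known_solutions:
        return "full marks"
    if runs_correctly_on_sample_data(placement):
        pending_review.append(placement)      # flagged for staff review
        return "provisionally correct, awaiting staff check"
    return "partial marks based on fragment placement"

def approve(placement):
    known_solutions.add(tuple(placement))     # identical future answers match instantly

print(mark(["read_input", "loop_sum", "print_total"]))
```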
Abstract:
Android OS supports multiple communication methods between apps. This opens the possibility of carrying out threats in a collaborative fashion, cf. the Soundcomber example from 2011. In this paper we provide a concise definition of collusion and report on a number of automated detection approaches, developed in co-operation with Intel Security.
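In that spirit, a minimal rule-based sketch of collusion detection is given below: a pair of apps is flagged when a sensitive source, an inter-app channel, and an exfiltration sink are split across the two so that neither looks dangerous alone (as in Soundcomber). This is illustrative only, not the detection approaches developed with Intel Security.

```python
SOURCES = {"RECORD_AUDIO", "READ_SMS", "READ_CONTACTS"}  # sensitive permissions
SINKS = {"INTERNET", "SEND_SMS"}                         # exfiltration permissions

def collusion_pairs(apps):
    """apps: dict name -> {"perms": set of permissions,
    "talks_to": set of app names reachable via inter-app communication}."""
    flagged = []
    for name, info in apps.items():
        for peer in info["talks_to"]:
            if info["perms"] & SOURCES and apps[peer]["perms"] & SINKS:
                flagged.append((name, peer))
    return flagged

apps = {
    "recorder": {"perms": {"RECORD_AUDIO"}, "talks_to": {"uploader"}},
    "uploader": {"perms": {"INTERNET"}, "talks_to": set()},
}
print(collusion_pairs(apps))   # [('recorder', 'uploader')]
```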
Abstract:
The annotation of Business Dynamics models with parameters and equations, required to simulate the system under study and evaluate its simulation output, typically involves a great deal of manual work. In this paper we present an approach for the automated equation formulation of a given Causal Loop Diagram (CLD) and a set of associated time series with the help of neural network evolution (NEvo). NEvo enables the automated retrieval of surrogate equations for each quantity in the given CLD, producing a fully annotated CLD that can be used in later simulations to predict future KPI development. At the end of the paper, we provide a detailed evaluation of NEvo on a business use case to demonstrate its single-step prediction capabilities.
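NEvo itself evolves neural networks, which is not reproduced here; the stand-in below merely illustrates the surrogate-equation idea of turning a quantity, its causal parents from the CLD, and their time series into an explicit equation, using least squares over candidate terms. The variable names and data are invented for the example.

```python
import numpy as np

t = np.arange(100.0)
marketing = 50.0 + 10.0 * np.sin(t / 10.0)   # parent quantity (toy series)
sales = (3.0 * marketing - 0.5 * t + 20.0
         + np.random.default_rng(1).normal(0, 1, 100))

# Candidate terms suggested by the CLD edges into "sales":
A = np.column_stack([marketing, t, np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, sales, rcond=None)
print("sales ~ {:.2f}*marketing + {:.2f}*t + {:.2f}".format(*coef))
```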
Abstract:
This paper examines the integration of a tolerance design process within the Computer-Aided Design (CAD) environment, having identified the potential to create an intelligent Digital Mock-Up [1]. The tolerancing process is complex in nature, and as such, reliance on Computer-Aided Tolerancing (CAT) software and domain experts can create a disconnect between the design and manufacturing disciplines. It is necessary to implement the tolerance design procedure at the earliest opportunity to integrate both disciplines and to reduce the workload of tolerance analysis and allocation at critical stages in product development, when production is imminent.
The work seeks to develop a methodology that allows for a preliminary tolerance allocation procedure within CAD. An approach to tolerance allocation based on sensitivity analysis is implemented on a simple assembly to review its contribution to an intelligent DMU. The procedure is developed using Python scripting for CATIA V5, with analysis results aligning with those in the literature. A review of its implementation and requirements is presented.
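As a standalone illustration of sensitivity-based allocation (the paper drives CATIA V5 through Python scripting, which is not reproduced here), consider a one-dimensional stack-up where gap = housing - part1 - part2. Tolerances are distributed in inverse proportion to each dimension's sensitivity so that the root-sum-square assembly tolerance meets a target; the equal weights are an assumption.

```python
import numpy as np

def allocate(sensitivities, target_tol, weights=None):
    """Distribute a target RSS assembly tolerance across dimensions,
    setting t_i = k * w_i / |s_i| so that sqrt(sum((s_i * t_i)^2)) = target."""
    s = np.abs(np.asarray(sensitivities, dtype=float))
    w = np.ones_like(s) if weights is None else np.asarray(weights, dtype=float)
    k = target_tol / np.sqrt(np.sum(w ** 2))
    return k * w / s

# Gap = housing - part1 - part2 -> sensitivities +1, -1, -1.
tols = allocate([+1.0, -1.0, -1.0], target_tol=0.30)
rss = np.sqrt(np.sum(tols ** 2))   # recovers the 0.30 target
print(tols, rss)
```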
Abstract:
Android is becoming ubiquitous and currently has the largest share of the mobile OS market, with billions of application downloads from the official app market. It has also become the platform most targeted by mobile malware, which is becoming more sophisticated in evading state-of-the-art detection approaches. Many Android malware families employ obfuscation techniques to avoid detection, which may defeat static-analysis-based approaches. Dynamic analysis, on the other hand, can be used to overcome this limitation. Hence, in this paper we propose DynaLog, a dynamic-analysis-based framework for characterizing Android applications. The framework provides the capability to analyse the behaviour of applications based on an extensive number of dynamic features. It provides an automated platform for the mass analysis and characterization of apps that is useful for quickly identifying and isolating malicious applications. The DynaLog framework leverages existing open source tools to extract and log high-level behaviours, API calls, and critical events that can be used to explore the characteristics of an application, thus providing an extensible dynamic analysis platform for detecting Android malware. DynaLog is evaluated using real malware samples and clean applications, demonstrating its capabilities for the effective analysis and detection of malicious applications.
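Downstream of the logging itself, characterization typically reduces a DynaLog-style behaviour log to a feature vector; the sketch below assumes a handful of illustrative features, whereas the framework's real feature set is described as far more extensive.

```python
# Illustrative feature names only; DynaLog's actual features are not listed here.
FEATURES = ["sendTextMessage", "getDeviceId", "DexClassLoader",
            "BOOT_COMPLETED", "HttpURLConnection"]

def featurize(log_lines):
    """Binary feature vector: which logged behaviours/APIs/events appeared."""
    seen = {f for line in log_lines for f in FEATURES if f in line}
    return [1 if f in seen else 0 for f in FEATURES]

log = [
    "API: TelephonyManager.getDeviceId()",
    "EVENT: BOOT_COMPLETED received",
    "API: SmsManager.sendTextMessage(...)",
]
print(featurize(log))   # [1, 1, 0, 1, 0]
```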