835 results for Automated Cryptanalysis


Relevance: 20.00%

Abstract:

Computed tomography (CT) is a valuable technology to the healthcare enterprise as evidenced by the more than 70 million CT exams performed every year. As a result, CT has become the largest contributor to population doses amongst all medical imaging modalities that utilize man-made ionizing radiation. Acknowledging the fact that ionizing radiation poses a health risk, there exists the need to strike a balance between diagnostic benefit and radiation dose. Thus, to ensure that CT scanners are optimally used in the clinic, an understanding and characterization of image quality and radiation dose are essential.

The state-of-the-art in both image quality characterization and radiation dose estimation in CT is dependent on phantom-based measurements reflective of systems and protocols. For image quality characterization, measurements are performed on inserts embedded in static phantoms and the results are ascribed to clinical CT images. However, the key objective for image quality assessment should be its quantification in the clinical images themselves; that is the only characterization of image quality that clinically matters, as it most directly reflects the actual quality of the clinical images. Moreover, for dose estimation, phantom-based dose metrics, such as the CT dose index (CTDI) and size-specific dose estimate (SSDE), are measured by the scanner and referenced as indicators of radiation exposure. However, CTDI and SSDE are surrogates for dose, rather than dose per se.
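To make concrete why SSDE is a size-scaled surrogate rather than a measured dose, the following minimal sketch shows how SSDE is commonly derived from the scanner-reported CTDIvol, assuming the AAPM Report 204 body-phantom conversion coefficients; it is not the dose estimation technique developed in this thesis.

```python
# Hypothetical illustration of deriving SSDE from CTDIvol in the style of
# AAPM Report 204: the scanner-reported CTDIvol is scaled by a size-dependent
# conversion factor based on the patient's effective diameter. The exponential
# fit coefficients below are the published 32 cm body-phantom values; treat
# them as an assumption for this sketch, not as this thesis's dose method.
import math

def ssde_from_ctdivol(ctdivol_mgy: float, effective_diameter_cm: float) -> float:
    """Estimate SSDE (mGy) from CTDIvol (32 cm phantom) and effective diameter."""
    a, b = 3.704369, 0.03671937          # conversion factor f = a * exp(-b * d)
    f = a * math.exp(-b * effective_diameter_cm)
    return f * ctdivol_mgy

# Example: a 28 cm effective-diameter patient scanned at CTDIvol = 10 mGy.
print(round(ssde_from_ctdivol(10.0, 28.0), 1))  # ~13.2 mGy
```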

Currently there are several software packages that track the CTDI and SSDE associated with individual CT examinations. This is driven primarily by two factors. The first is regulatory and governmental pressure on clinics and hospitals to monitor the radiation exposure of individuals. The second is the personal concern of patients who are curious about the health risks associated with the ionizing radiation they receive as a result of their diagnostic procedures.

An idea that resonates with clinical imaging physicists is that patients come to the clinic to acquire quality images so they can receive a proper diagnosis, not to be exposed to ionizing radiation. Thus, while it is important to monitor the dose to patients undergoing CT examinations, it is equally, if not more, important to monitor the image quality of the clinical images generated by the CT scanners throughout the hospital.

The purposes of the work presented in this thesis are threefold: (1) to develop and validate a fully automated technique to measure spatial resolution in clinical CT images, (2) to develop and validate a fully automated technique to measure image contrast in clinical CT images, and (3) to develop a fully automated technique to estimate radiation dose (not surrogates for dose) from a variety of clinical CT protocols.

Relevance: 20.00%

Abstract:

Purpose: To investigate the effect of incorporating a beam spreading parameter in a beam angle optimization algorithm and to evaluate its efficacy for creating coplanar IMRT lung plans in conjunction with machine learning generated dose objectives.

Methods: Fifteen anonymized patient cases were each re-planned with ten values over the range of the beam spreading parameter, k, and analyzed with a Wilcoxon signed-rank test to determine whether any particular value resulted in significant improvement over the initially treated plan created by a trained dosimetrist. Dose constraints were generated by a machine learning algorithm and kept constant for each case across all k values. Parameters investigated for potential improvement included mean lung dose, V20 lung, V40 heart, 80% conformity index, and 90% conformity index.
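Purely as an illustration of this paired comparison, here is a minimal sketch using scipy's Wilcoxon signed-rank implementation on hypothetical per-case conformity index values; the actual planning system and patient data are not reproduced here.

```python
# Minimal sketch of the paired comparison described above: for one value of the
# beam spreading parameter k, a plan metric (here a 90% conformity index) from
# the re-planned cases is compared against the initially treated plans with a
# Wilcoxon signed-rank test. All values are hypothetical placeholders.
import numpy as np
from scipy.stats import wilcoxon

initial_ci90 = np.array([0.62, 0.58, 0.71, 0.66, 0.60, 0.64, 0.69,
                         0.57, 0.63, 0.65, 0.59, 0.68, 0.61, 0.70, 0.67])
replanned_ci90 = initial_ci90 + np.random.default_rng(0).normal(0.05, 0.02, 15)

stat, p_value = wilcoxon(replanned_ci90, initial_ci90)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:  # 5% significance level, as in the study
    print("Re-planned conformity indices differ significantly from the initial plans.")
```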

Results: At a 5% significance level, treatment plans created with this method resulted in significantly better conformity indices. Dose coverage of the PTV was improved by an average of 12% over the initial plans. At the same time, these treatment plans showed no significant difference in mean lung dose, V20 lung, or V40 heart when compared to the initial plans; however, it should be noted that these results could be influenced by the small sample size of patient cases.

Conclusions: The beam angle optimization algorithm, with the inclusion of the beam spreading parameter k, increases the dose conformity of the automatically generated treatment plans over that of the initial plans without adversely affecting the dose to organs at risk. This parameter can be varied according to physician preference in order to control the tradeoff between dose conformity and OAR sparing without compromising the integrity of the plan.

Relevance: 20.00%

Abstract:

Safeguarding organizations against opportunism and severe deception in computer-mediated communication (CMC) presents a major challenge to CIOs and IT managers. New insights into linguistic cues of deception derive from the speech acts innate to CMC. Applying automated text analysis to archival email exchanges in a CMC system as part of a reward program, we assess the ability of word use (micro-level), message development (macro-level), and intertextual exchange cues (meta-level) to detect severe deception by business partners. We empirically assess the predictive ability of our framework using an ordinal multilevel regression model. Results indicate that deceivers minimize the use of referencing and self-deprecation but include more superfluous descriptions and flattery. Deceitful channel partners also over-structure their arguments and rapidly mimic the linguistic style of the account manager across dyadic email exchanges. Thanks to its diagnostic value, the proposed framework can support firms' decision-making and guide the development of compliance monitoring systems.
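As a purely illustrative sketch of how cue measurements at these levels could feed an ordinal model, the snippet below fits a single-level ordinal logit with statsmodels on synthetic data. The study's actual ordinal multilevel regression (with partner-level effects) is not reproduced, and all feature names and data are hypothetical.

```python
# Simplified, single-level sketch of scoring deception severity from linguistic
# cues with an ordinal regression (the paper uses an ordinal *multilevel* model;
# partner-level random effects are omitted here). Feature names and data are
# hypothetical placeholders for the micro/macro/meta-level cues described above.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "self_references": rng.normal(0, 1, n),   # micro-level word-use cue
    "flattery":        rng.normal(0, 1, n),   # micro-level word-use cue
    "structure_score": rng.normal(0, 1, n),   # macro-level message development
    "style_mimicry":   rng.normal(0, 1, n),   # meta-level intertextual cue
})
# Hypothetical ordered outcome: truthful < mild deception < severe deception.
latent = (-0.4 * df["self_references"] + 0.5 * df["flattery"]
          + 0.3 * df["structure_score"] + 0.4 * df["style_mimicry"]
          + rng.normal(0, 1, n))
df["severity"] = pd.cut(latent, bins=[-np.inf, -0.5, 0.8, np.inf],
                        labels=["truthful", "mild", "severe"])

model = OrderedModel(df["severity"],
                     df[["self_references", "flattery",
                         "structure_score", "style_mimicry"]],
                     distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```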

Relevance: 20.00%

Abstract:

The popularity of Computing degrees in the UK has been increasing significantly in recent years. In Northern Ireland, from 2007 to 2015, there was a 40% increase in acceptances to Computer Science degrees, with England seeing a 60% increase over the same period (UCAS, 2016). However, this growth is tempered by the fact that Computer Science degrees also continue to have the highest dropout rates.
At Queen's University Belfast we currently have a Level 1 intake of over 400 students across a number of computing pathways. Our drive as staff is to empower and motivate the students to fully engage with the course content. All students take a Java programming module, the aim of which is to provide an understanding of the basic principles of object-oriented design. In order to assess these skills, we have developed Jigsaw Java, an innovative assessment tool offering intelligent, semi-supervised automated marking of code.
Jigsaw Java allows students to answer programming questions using a drag-and-drop interface to place code fragments into position. Their answer is compared to the sample solution and if it matches, marks are allocated accordingly. However, if a match is not found then the corresponding code is executed using sample data to determine if its logic is acceptable. If it is, the solution is flagged to be checked by staff and if satisfactory is saved as an alternative solution. This means that appropriate marks can be allocated and should another student have submitted the same placement of code fragments this does not need to be executed or checked again. Rather the system now knows how to assess it.
Jigsaw Java is also able to award partial marks depending on code placement and will "learn" over time. Given the number of students, Jigsaw Java will improve the consistency and timeliness of marking.
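Jigsaw Java itself marks Java code; purely for illustration, the Python sketch below mirrors the marking workflow described above: reuse marks for previously accepted fragment arrangements, otherwise execute the arrangement on sample data and queue acceptable answers for staff review. All fragments, names and marks are hypothetical.

```python
# Illustrative workflow sketch (in Python, though Jigsaw Java marks Java code):
# a submitted arrangement of code fragments is first matched against known
# solutions; otherwise it is executed on sample data and, if its behaviour is
# acceptable, flagged for staff review before being saved as a new alternative.
known_solutions = {("result = x * 2", "result = result + 1"): 10}  # fragments -> mark
review_queue = []  # arrangements awaiting staff confirmation

def run_fragments(fragments, sample_input):
    """Execute the assembled fragments on sample data (simplified to Python exec)."""
    namespace = {"x": sample_input}
    exec("\n".join(fragments), namespace)
    return namespace.get("result")

def mark_submission(fragments, sample_input, expected_output, full_marks=10):
    key = tuple(fragments)
    if key in known_solutions:            # previously accepted arrangement: reuse mark
        return known_solutions[key]
    if run_fragments(fragments, sample_input) == expected_output:
        review_queue.append(key)          # logic acceptable: flag for staff review;
        return None                       # once confirmed, it joins known_solutions
    return 0                              # unknown arrangement with incorrect logic

print(mark_submission(["result = x + x", "result = result + 1"], 3, 7))  # None -> queued
```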

Relevance: 20.00%

Abstract:

Android OS supports multiple communication methods between apps. This opens up the possibility of carrying out threats in a collaborative fashion, cf. the Soundcomber example from 2011. In this paper we provide a concise definition of collusion and report on a number of automated detection approaches, developed in co-operation with Intel Security.
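The detection approaches developed with Intel Security are not detailed in this abstract; as a generic illustration of the kind of rule that flags potential collusion, the sketch below pairs apps that share a communication channel where one holds a sensitive-source permission and the other can exfiltrate data. App metadata and channel names are hypothetical.

```python
# A minimal, generic sketch of one way app collusion can be flagged (not the
# specific detectors reported in the paper): pair apps that share a
# communication channel where one holds a sensitive-source permission and the
# other holds an exfiltration capability.
SENSITIVE = {"RECORD_AUDIO", "READ_SMS", "READ_CONTACTS"}
EXFIL = {"INTERNET", "SEND_SMS"}

apps = [
    {"name": "voice_notes", "permissions": {"RECORD_AUDIO"}, "channels": {"com.example.shared"}},
    {"name": "wallpapers",  "permissions": {"INTERNET"},     "channels": {"com.example.shared"}},
]

def potential_collusion_pairs(apps):
    pairs = []
    for source in apps:
        for sink in apps:
            if source is sink:
                continue
            shared = source["channels"] & sink["channels"]
            if shared and (source["permissions"] & SENSITIVE) and (sink["permissions"] & EXFIL):
                pairs.append((source["name"], sink["name"], sorted(shared)))
    return pairs

print(potential_collusion_pairs(apps))
# [('voice_notes', 'wallpapers', ['com.example.shared'])]
```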

Relevance: 20.00%

Abstract:

The annotation of Business Dynamics models with parameters and equations, to simulate the system under study and further evaluate its simulation output, typically involves a lot of manual work. In this paper we present an approach for automated equation formulation for a given Causal Loop Diagram (CLD) and a set of associated time series with the help of neural network evolution (NEvo). NEvo enables the automated retrieval of surrogate equations for each quantity in the given CLD, hence producing a fully annotated CLD that can be used in later simulations to predict future KPI development. At the end of the paper, we provide a detailed evaluation of NEvo on a business use case to demonstrate its single-step prediction capabilities.
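As a heavily simplified, hypothetical stand-in for this idea (not the NEvo algorithm itself), the sketch below evolves the weights of a tiny neural network by random mutation so that it serves as a surrogate equation for one CLD quantity, predicting it from its causal parents in the diagram.

```python
# Drastically simplified stand-in for the idea behind NEvo (not its actual
# algorithm): evolve the weights of a tiny neural network so that it acts as a
# surrogate equation for one CLD quantity, predicting it from the quantities
# that influence it in the diagram. Data and the "true" relationship are made up.
import numpy as np

rng = np.random.default_rng(1)
parents = rng.normal(size=(200, 2))                       # causal inputs from the CLD
target = 0.8 * parents[:, 0] - 0.3 * parents[:, 1] ** 2   # "unknown" relationship

def predict(weights, x):
    w1 = weights[:8].reshape(2, 4)                        # one-hidden-layer surrogate
    w2 = weights[8:]
    return np.tanh(x @ w1) @ w2

def fitness(weights):
    return -np.mean((predict(weights, parents) - target) ** 2)

# (1+1)-style random mutation search over the weight vector.
best = rng.normal(size=12)
for generation in range(3000):
    child = best + rng.normal(scale=0.1, size=12)
    if fitness(child) > fitness(best):
        best = child

print(f"surrogate MSE: {-fitness(best):.4f}")
```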

Relevance: 20.00%

Abstract:

This paper examines the integration of a tolerance design process within the Computer-Aided Design (CAD) environment, having identified the potential to create an intelligent Digital Mock-Up [1]. The tolerancing process is complex in nature, and as such reliance on Computer-Aided Tolerancing (CAT) software and domain experts can create a disconnect between the design and manufacturing disciplines. It is necessary to implement the tolerance design procedure at the earliest opportunity to integrate both disciplines and to reduce the workload of tolerance analysis and allocation at critical stages in product development, when production is imminent.
The work seeks to develop a methodology that will allow for a preliminary tolerance allocation procedure within CAD. An approach to tolerance allocation based on sensitivity analysis is implemented on a simple assembly to review its contribution to an intelligent DMU. The procedure is developed using Python scripting for CATIA V5, with analysis results aligning with those in the literature. A review of its implementation and requirements is presented.
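As a generic illustration of sensitivity-based tolerance allocation (not the specific CATIA V5 procedure developed in this work), the Python sketch below distributes an assembly tolerance budget across dimensions in inverse proportion to their sensitivities under a root-sum-square stack-up, using hypothetical values.

```python
# Generic sketch of sensitivity-based tolerance allocation: given the
# sensitivities of an assembly response to each dimension, distribute an
# assembly tolerance budget so that each dimension contributes equally under a
# root-sum-square (RSS) stack-up. Sensitivity values below are hypothetical.
import math

def allocate_rss(assembly_tol, sensitivities):
    """Return per-dimension tolerances giving equal RSS contributions."""
    n = len(sensitivities)
    # Each term (S_i * t_i)^2 contributes assembly_tol^2 / n, hence
    # t_i = assembly_tol / (sqrt(n) * |S_i|).
    return {name: assembly_tol / (math.sqrt(n) * abs(s))
            for name, s in sensitivities.items()}

sensitivities = {"pin_diameter": 1.0, "plate_thickness": 0.5, "bracket_offset": 2.0}
tolerances = allocate_rss(assembly_tol=0.12, sensitivities=sensitivities)
for name, tol in tolerances.items():
    print(f"{name}: +/- {tol:.4f} mm")

# Check: the RSS of the allocated contributions recovers the assembly tolerance.
rss = math.sqrt(sum((sensitivities[k] * tolerances[k]) ** 2 for k in tolerances))
print(f"assembly RSS = {rss:.4f} mm")
```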

Relevance: 20.00%

Abstract:

Android is becoming ubiquitous and currently has the largest share of the mobile OS market, with billions of application downloads from the official app market. It has also become the platform most targeted by mobile malware, which is becoming more sophisticated in evading state-of-the-art detection approaches. Many Android malware families employ obfuscation techniques in order to avoid detection, and this may defeat static-analysis-based approaches. Dynamic analysis, on the other hand, may be used to overcome this limitation. Hence, in this paper we propose DynaLog, a dynamic-analysis-based framework for characterizing Android applications. The framework provides the capability to analyse the behaviour of applications based on an extensive number of dynamic features. It provides an automated platform for mass analysis and characterization of apps that is useful for quickly identifying and isolating malicious applications. The DynaLog framework leverages existing open source tools to extract and log high-level behaviours, API calls, and critical events that can be used to explore the characteristics of an application, thus providing an extensible dynamic analysis platform for detecting Android malware. DynaLog is evaluated using real malware samples and clean applications, demonstrating its capabilities for effective analysis and detection of malicious applications.
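The sketch below is not DynaLog's implementation; it only illustrates the kind of post-processing such behaviour logs enable, turning dynamically logged API calls and events into a binary feature vector per app. The monitored features and the sample log are hypothetical.

```python
# Minimal sketch of turning a dynamic-analysis behaviour log (API calls and
# events) into a binary feature vector per app, ready for characterization or
# classification. Monitored features and the sample log lines are made up.
MONITORED = [
    "SmsManager.sendTextMessage",
    "TelephonyManager.getDeviceId",
    "Runtime.exec",
    "DexClassLoader.loadClass",
    "BOOT_COMPLETED",
]

def featurise(log_lines):
    """Map one app's dynamic-analysis log to a binary feature vector."""
    observed = set()
    for line in log_lines:
        for feature in MONITORED:
            if feature in line:
                observed.add(feature)
    return [1 if f in observed else 0 for f in MONITORED]

sample_log = [
    "12:01:03 API TelephonyManager.getDeviceId()",
    "12:01:04 API SmsManager.sendTextMessage(dest, null, body, null, null)",
    "12:01:09 EVENT BOOT_COMPLETED receiver triggered",
]
print(featurise(sample_log))  # [1, 1, 0, 0, 1]
```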

Relevance: 20.00%

Abstract:

Automated acceptance testing is higher-level testing of software that checks whether the system meets the requirements of the business clients, using scripts separate from the software itself. This project is a study of the feasibility of acceptance tests written according to Behavior Driven Development principles. The project includes an implementation part in which automated acceptance tests were written for the Touch-point web application developed by Dewire (a software consultancy) for Telia (a telecom company), based on the requirements received from the customer (Telia). The automated acceptance tests use the Cucumber-Selenium framework, which enforces Behavior Driven Development principles. The purpose of the implementation is to verify the practicability of this style of acceptance testing. From the completed implementation, it was concluded that all the real-world requirements from the customer could be converted into executable specifications, and that the process was not at all time-consuming or difficult for a low-experienced programmer such as the author.
The project also includes a survey to measure the learnability and understandability of Gherkin, the language that Cucumber understands. The survey consists of Gherkin examples followed by questions that involve making changes to those examples. The survey had three parts: the first easy, the second of medium difficulty, and the third the most difficult. It also used a linear scale from 1 (very easy) to 5 (very difficult) to rate the difficulty of each part, and the time at which participants began the survey was recorded in order to calculate the total time taken to learn the material and answer the questions. The survey was taken by 18 Dewire employees whose primary working role was programmer, tester, or project manager; in the analysis of the results, testers and project managers were grouped as non-programmers. The survey concluded that Gherkin is very easy and quick to learn, and the participants rated it as very easy.
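The project itself uses Cucumber with Selenium; purely for illustration, the sketch below expresses a comparable Gherkin scenario and step definitions in Python using the behave library (a Cucumber-style BDD tool) and the Selenium bindings. The feature text, URL and element IDs are hypothetical and not taken from the Touch-point test suite.

```python
# Illustrative only: a Gherkin scenario (shown as a comment, as it would live in
# features/login.feature) and matching behave step definitions driving Selenium.
#
#   Feature: Logging in
#     Scenario: Valid user signs in
#       Given the login page is open
#       When the user signs in as "demo"
#       Then the dashboard is shown
#
# features/steps/login_steps.py:
from behave import given, when, then
from selenium import webdriver
from selenium.webdriver.common.by import By

@given("the login page is open")
def step_open_login(context):
    context.browser = webdriver.Firefox()
    context.browser.get("https://example.org/login")   # hypothetical URL

@when('the user signs in as "{username}"')
def step_sign_in(context, username):
    context.browser.find_element(By.ID, "username").send_keys(username)
    context.browser.find_element(By.ID, "submit").click()

@then("the dashboard is shown")
def step_check_dashboard(context):
    assert "Dashboard" in context.browser.title
    context.browser.quit()
```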

Relevance: 20.00%

Abstract:

An introductory exercise to practice when reading papers

Relevance: 20.00%

Abstract:

Advances in communication, navigation and imaging technologies are expected to fundamentally change the methods currently used to collect data. Electronic data interchange strategies will also minimize data handling and automatically update files at the point of capture. This report summarizes the outcome of using a multi-camera platform as a method to collect roadway inventory data. It defines basic system requirements as expressed by users who applied these techniques, and examines how the application of the technology met those needs. A sign inventory case study was used to determine the advantages of creating and maintaining such a database, which also provides the capability to monitor performance criteria for a Safety Management System. The project identified that at least 75 percent of the data elements needed for a sign inventory can be gathered by viewing a high-resolution image.

Relevance: 20.00%

Abstract:

The automated transfer of flight logbook information from aircraft into aircraft maintenance systems leads to reduced ground and maintenance time and is thus desirable from an economic point of view. Until recently, flight logbooks have not been managed electronically in aircraft, or at least the data transfer from aircraft to ground maintenance system has been executed manually. The latest aircraft types, such as the Airbus A380 or the Boeing 787, do support an electronic logbook and thus make an automated transfer possible. A generic flight logbook transfer system must deal with different data formats on the input side – due to different aircraft makes and models – as well as different, distributed aircraft maintenance systems for the different airlines operating the aircraft. This article contributes the concept and top-level distributed system architecture of such a generic system for automated flight log data transfer. It has been developed within a joint industry and applied research project. The architecture has already been successfully evaluated in a prototypical implementation.
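As a minimal, hypothetical sketch of the normalisation problem described above (not the architecture developed in the project), the snippet below shows aircraft-type-specific parsers mapping different logbook formats onto one canonical record, which is then routed to the operating airline's maintenance endpoint.

```python
# Minimal adapter-style sketch: type-specific parsers normalise different
# logbook formats into one canonical record, which is then routed to the
# operating airline's maintenance system. All formats, field names, tail
# numbers and endpoints are hypothetical.
from dataclasses import dataclass

@dataclass
class LogbookEntry:                      # canonical record used on the ground side
    aircraft: str
    flight: str
    defect: str

def parse_type_a(raw: dict) -> LogbookEntry:       # e.g. one aircraft type's JSON export
    return LogbookEntry(raw["tail"], raw["flightNo"], raw["defectText"])

def parse_type_b(raw: str) -> LogbookEntry:        # e.g. another type's delimited text
    tail, flight, defect = raw.split("|")
    return LogbookEntry(tail, flight, defect)

MAINTENANCE_ENDPOINTS = {"D-ABCD": "https://mro.airline-one.example/api",
                         "G-WXYZ": "https://mro.airline-two.example/api"}

def dispatch(entry: LogbookEntry) -> str:
    """Route the canonical entry to the operator's maintenance system."""
    endpoint = MAINTENANCE_ENDPOINTS[entry.aircraft]
    return f"POST {endpoint} -> {entry}"

print(dispatch(parse_type_a({"tail": "D-ABCD", "flightNo": "XQ123", "defectText": "Galley light u/s"})))
print(dispatch(parse_type_b("G-WXYZ|XQ456|Seat 12A recline inop")))
```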

Relevance: 20.00%

Abstract:

Objective: Phenobarbital increases electroclinical uncoupling, and our preliminary observations suggest it may also affect electrographic seizure morphology. This may alter the performance of a novel seizure detection algorithm (SDA) developed by our group. The objectives of this study were to compare the morphology of seizures before and after phenobarbital administration in neonates and to determine the effect of any changes on automated seizure detection rates. Methods: The EEGs of 18 term neonates with seizures both pre- and post-phenobarbital administration (524 seizures) were studied. Ten features of seizures were manually quantified, and summary measures for each neonate were statistically compared between pre- and post-phenobarbital seizures. SDA seizure detection rates were also compared. Results: Post-phenobarbital seizures showed significantly lower amplitude (p < 0.001) and involved fewer EEG channels at the peak of seizure (p < 0.05). No other features or SDA detection rates showed a statistical difference. Conclusion: These findings show that phenobarbital reduces both the amplitude and propagation of seizures, which may help to explain electroclinical uncoupling of seizures. The seizure detection rate of the algorithm was unaffected by these changes. Significance: The results suggest that users should not need to adjust the SDA sensitivity threshold after phenobarbital administration.

Relevance: 20.00%

Abstract:

As a way to gain greater insights into the operation of online communities, this dissertation applies automated text mining techniques to text-based communication to identify, describe and evaluate underlying social networks among online community members. The main thrust of the study is to automate the discovery of social ties that form between community members, using only the digital footprints left behind in their online forum postings. Currently, one of the most common but time-consuming methods for discovering social ties between people is to ask questions about their perceived social ties. However, such surveys are difficult to conduct due to the high investment of time associated with data collection and the sensitive nature of the types of questions that may be asked. To overcome these limitations, the dissertation presents a new, content-based method for automated discovery of social networks from threaded discussions, referred to as the 'name network'. As a case study, the proposed automated method is evaluated in the context of online learning communities. The results suggest that the proposed 'name network' method for collecting social network data is a viable alternative to the costly and time-consuming collection of users' data using surveys. The study also demonstrates how social networks produced by the 'name network' method can be used to study online classes and to look for evidence of collaborative learning in online learning communities. For example, educators can use name networks as a real-time diagnostic tool to identify students who might need additional help or students who may provide such help to others. Future research will evaluate the usefulness of the 'name network' method in other types of online communities.
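A simplified sketch of the 'name network' idea follows: each forum post is scanned for mentions of other members' names, and a directed tie is added from the post's author to the member mentioned. The dissertation's actual name matching is more sophisticated; the posts and names below are hypothetical.

```python
# Simplified sketch of building a 'name network' from threaded discussions:
# scan each post for mentions of other community members' names and add a tie
# from the post's author to the member mentioned. Posts and names are made up.
import networkx as nx

members = {"Alice", "Bob", "Carol"}
posts = [
    {"author": "Bob",   "text": "Alice, thanks for the summary of chapter 3."},
    {"author": "Carol", "text": "I agree with Alice, but Bob's point still stands."},
    {"author": "Alice", "text": "Good discussion everyone, see you next week."},
]

name_network = nx.DiGraph()
name_network.add_nodes_from(members)
for post in posts:
    for member in members - {post["author"]}:
        if member in post["text"]:                      # naive name-mention match
            name_network.add_edge(post["author"], member)

print(sorted(name_network.edges()))
# [('Bob', 'Alice'), ('Carol', 'Alice'), ('Carol', 'Bob')]
```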