956 results for second pre-image attack


Relevance:

30.00%

Publisher:

Abstract:

Computed tomography (CT) is a valuable technology to the healthcare enterprise, as evidenced by the more than 70 million CT exams performed every year. As a result, CT has become the largest contributor to population doses amongst all medical imaging modalities that utilize man-made ionizing radiation. Because ionizing radiation poses a health risk, a balance must be struck between diagnostic benefit and radiation dose. Thus, to ensure that CT scanners are used optimally in the clinic, an understanding and characterization of image quality and radiation dose are essential.

The state of the art in both image quality characterization and radiation dose estimation in CT depends on phantom-based measurements reflective of systems and protocols. For image quality characterization, measurements are performed on inserts embedded in static phantoms and the results are ascribed to clinical CT images. However, the key objective of image quality assessment should be its quantification in clinical images themselves; that is the only characterization of image quality that clinically matters, as it relates most directly to the actual quality of clinical images. Moreover, for dose estimation, phantom-based dose metrics, such as the CT dose index (CTDI) and size-specific dose estimates (SSDE), are measured by the scanner and referenced as indicators of radiation exposure. However, CTDI and SSDE are surrogates for dose, rather than dose per se.
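To make the SSDE concept concrete, the sketch below shows the standard relationship from AAPM Report 204, in which the scanner-reported CTDIvol is scaled by a conversion factor determined by the patient's effective diameter. It is a minimal illustration with placeholder factor values and hypothetical function names, not the dose-estimation technique developed in this thesis.

```python
# Minimal sketch of the SSDE idea (AAPM Report 204): the scanner-reported
# CTDIvol is scaled by a size-dependent conversion factor derived from the
# patient's effective diameter. The factor table below is a rough placeholder,
# NOT the published AAPM values -- substitute the official table in practice.
import math

# Hypothetical conversion factors keyed by effective diameter in cm
# (illustrative numbers only).
CONVERSION_FACTORS = {16: 1.9, 20: 1.6, 24: 1.4, 28: 1.2, 32: 1.1, 36: 1.0}

def effective_diameter(ap_cm: float, lat_cm: float) -> float:
    """Effective diameter = sqrt(AP * LAT) of the patient cross-section."""
    return math.sqrt(ap_cm * lat_cm)

def ssde(ctdi_vol_mgy: float, ap_cm: float, lat_cm: float) -> float:
    """Scale CTDIvol by the factor for the nearest tabulated diameter."""
    d = effective_diameter(ap_cm, lat_cm)
    nearest = min(CONVERSION_FACTORS, key=lambda k: abs(k - d))
    return ctdi_vol_mgy * CONVERSION_FACTORS[nearest]

if __name__ == "__main__":
    print(f"SSDE ~ {ssde(12.0, ap_cm=22.0, lat_cm=30.0):.1f} mGy")
```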

Currently, several software packages track the CTDI and SSDE associated with individual CT examinations. This is primarily driven by two factors. The first is pressure from bureaucracies and governments on clinics and hospitals to monitor the radiation exposure of individuals in our society. The second is the personal concern of patients who are curious about the health risks associated with the ionizing radiation exposure they receive as a result of their diagnostic procedures.

An idea that resonates with clinical imaging physicists is that patients come to the clinic to acquire quality images so they can receive a proper diagnosis, not to be exposed to ionizing radiation. Thus, while it is important to monitor the dose to patients undergoing CT examinations, it is equally important, if not more so, to monitor the image quality of the clinical images generated by the CT scanners throughout the hospital.

The purposes of the work presented in this thesis are threefold: (1) to develop and validate a fully automated technique to measure spatial resolution in clinical CT images, (2) to develop and validate a fully automated technique to measure image contrast in clinical CT images, and (3) to develop a fully automated technique to estimate radiation dose (not surrogates for dose) from a variety of clinical CT protocols.

Relevance:

30.00%

Publisher:

Abstract:

Photographs from the February 1997 Bermuda meeting. Courtesy of Gert-Jan van Ommen.

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes the development of an open-source system for virtual bronchoscopy used in combination with electromagnetic instrument tracking. The end application is virtual navigation of the lung for biopsy of early-stage cancer nodules. The open-source platform 3D Slicer was used for creating freely available algorithms for virtual bronchoscopy. Firstly, the development of an open-source semi-automatic algorithm for prediction of solitary pulmonary nodule malignancy is presented. This approach may help the physician decide whether to proceed with biopsy of the nodule. The user-selected nodule is segmented in order to extract radiological characteristics (i.e., size, location, edge smoothness, calcification presence, cavity wall thickness) which are combined with patient information to calculate the likelihood of malignancy. The overall accuracy of the algorithm is shown to be high compared to independent experts' assessment of malignancy. The algorithm is also compared with two different predictors, and our approach is shown to provide the best overall prediction accuracy. The development of an airway segmentation algorithm which extracts the airway tree from surrounding structures on chest Computed Tomography (CT) images is then described. This represents the first fundamental step toward the creation of a virtual bronchoscopy system. Clinical and ex-vivo images are used to evaluate performance of the algorithm. Different CT scan parameters are investigated and parameters for successful airway segmentation are optimized. Slice thickness is the parameter with the greatest effect, while variation of reconstruction kernel and radiation dose is shown to be less critical. Airway segmentation is used to create a 3D rendered model of the airway tree for virtual navigation. Finally, the first open-source virtual bronchoscopy system was combined with electromagnetic tracking of the bronchoscope for the development of a GPS-like system for navigating within the lungs. Tools for pre-procedural planning and for helping with navigation are provided. Registration between the lungs of the patient and the virtually reconstructed airway tree is achieved using a landmark-based approach. In an attempt to reduce difficulties with registration errors, we also implemented a landmark-free registration method based on a balanced airway survey. In-vitro and in-vivo testing showed good accuracy for this registration approach. The centreline of the 3D airway model is extracted and used to compensate for possible registration errors. Tools are provided to select a target for biopsy on the patient CT image, and pathways from the trachea towards the selected targets are automatically created. The pathways guide the physician during navigation, while distance-to-target information is updated in real time and presented to the user. During navigation, video from the bronchoscope is streamed and presented to the physician next to the 3D rendered image. The electromagnetic tracking is implemented with 5 DOF sensing that does not provide roll rotation information. An intensity-based image registration approach is implemented to rotate the virtual image according to the bronchoscope's rotations. The virtual bronchoscopy system is shown to be easy to use and accurate in replicating the clinical setting, as demonstrated in the pre-clinical environment of a breathing lung model. Animal studies were performed to evaluate the overall system performance.
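The landmark-based registration step mentioned above can be illustrated with a generic least-squares point-set alignment (an SVD-based rigid fit in the style of Horn/Kabsch). This is a sketch under that assumption, not the 3D Slicer module developed in the thesis; the function names and the error metric are illustrative.

```python
# Generic sketch of landmark-based rigid registration: given paired landmark
# coordinates picked on the patient (tracker space) and on the CT-derived
# airway model, estimate the rotation R and translation t that best align
# them in a least-squares sense (SVD-based, Kabsch/Horn style).
import numpy as np

def rigid_registration(src: np.ndarray, dst: np.ndarray):
    """src, dst: (N, 3) arrays of corresponding landmarks. Returns R (3x3), t (3,)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """RMS distance between transformed source landmarks and their targets."""
    resid = (R @ src.T).T + t - dst
    return float(np.sqrt((resid ** 2).sum(axis=1).mean()))
```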

Relevance:

30.00%

Publisher:

Abstract:

Various works of modern and contemporary poetry stage a lyric subject's relationship to writing. This problem finds a particularly interesting embodiment in the work of Patrice Desbiens, notably in some of his texts from the 1990s and 2000s, where it appears with greater acuity. Yet his self-reflexive practice has been the subject of very little research. In order to shed light on Patrice Desbiens's relationship to writing and to poetry, this thesis examines two of his texts, La fissure de la fiction (1997) and Désâmé (2005), devoting more space to the former, which I consider a pivotal text in Desbiens's poetic output. First, my work presents the precariousness that characterizes the protagonist of La fissure de la fiction and, from another angle, the lyric subject of Désâmé. From this perspective, the figure of the poet in La fissure de la fiction is studied in light of the ironic reprise of the myth of the literary curse and of the meaning that the reactivation of this myth confers on the character in this poetic narrative. Second, this thesis sets out to show that the coherence and verisimilitude of the worlds staged in La fissure de la fiction and Désâmé are undermined. In light of these analyses, we can then consider the role of a poetry that, in the last instance, nevertheless retains a consoling character, despite or because of the aesthetic of the grotesque, at times comic, at times tragic, within which it is inscribed and which we endeavour to bring to light.

Relevance:

30.00%

Publisher:

Abstract:

Scientists planning to use underwater stereoscopic image technologies are often faced with numerous problems during the methodological implementation: commercial equipment is too expensive; the setup or calibration is too complex; or the image processing (i.e. measuring objects in the stereo-images) is too complicated to be performed without a time-consuming phase of training and evaluation. The present paper addresses some of these problems and describes a workflow for stereoscopic measurements for marine biologists. It also provides instructions on how to assemble an underwater stereo-photographic system with two digital consumer cameras and gives step-by-step guidelines for setting up the hardware. The second part details a software procedure to correct stereo-image pairs for lens distortions, which is especially important when using cameras with non-calibrated optical units. The final part presents a guide to the process of measuring the lengths (or distances) of objects in stereoscopic image pairs. To reveal the applicability and the restrictions of the described systems and to test the effects of different types of camera (a compact camera and an SLR type), experiments were performed to determine the precision and accuracy of two generic stereo-imaging units: a diver-operated system based on two Olympus Mju 1030SW compact cameras and a cable-connected observatory system based on two Canon 1100D SLR cameras. In the simplest setup, without any correction for lens distortion, the low-budget Olympus Mju 1030SW system achieved mean accuracy errors (percentage deviation of a measurement from the object's real size) between 10.2% and −7.6% (overall mean value: −0.6%), depending on the size, orientation and distance of the measured object from the camera. With the single-lens reflex (SLR) system, very similar values between 10.1% and −3.4% (overall mean value: −1.2%) were observed. Correction of the lens distortion significantly improved the mean accuracy errors of both systems. Moreover, system precision (spread of the accuracy) improved significantly in both systems. Neither the use of a wide-angle converter nor multiple reassembly of the system had a significant negative effect on the results. The study shows that underwater stereophotography, independent of the system, has a high potential for robust and non-destructive in situ sampling and can be used without prior specialist training.
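For readers unfamiliar with the underlying geometry, the sketch below shows how a length measurement follows from an idealized, rectified stereo pair via the depth-from-disparity relation Z = f·B/d. It is a simplified illustration with made-up camera parameters, not the calibration and measurement software described in the paper.

```python
# Minimal sketch of length measurement in a rectified stereo pair: with
# focal length f (pixels), baseline B (metres) and horizontal disparity d
# (pixels), depth is Z = f * B / d; X and Y follow from the pinhole model.
# Idealized geometry -- real systems need calibration and lens-distortion
# correction as described in the paper.
import numpy as np

def triangulate(u, v, disparity, f, cx, cy, baseline):
    """Back-project a left-image pixel (u, v) with a given disparity to 3D (metres)."""
    Z = f * baseline / disparity
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

def object_length(p_head, p_tail):
    """Euclidean distance between two triangulated end points of an object."""
    return float(np.linalg.norm(p_head - p_tail))

# Illustrative usage with made-up camera parameters:
f, cx, cy, B = 1400.0, 960.0, 540.0, 0.25
head = triangulate(1010, 500, 35.0, f, cx, cy, B)
tail = triangulate(870, 560, 33.0, f, cx, cy, B)
print(f"estimated length: {object_length(head, tail):.3f} m")
```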

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a technique to defeat Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks in ad hoc networks. The technique is divided into two main parts, combining game theory and cryptographic puzzles. First, a new client puzzle is introduced to prevent DoS attacks in such networks. The second part presents a multiplayer game that takes place between the nodes of an ad hoc network and is based on fundamental principles of game theory. Combining computational problems with puzzles improves the efficiency and latency of the communicating nodes and their resistance to DoS and DDoS attacks. Experimental results show the effectiveness of the approach for devices with limited resources and for environments like ad hoc networks, where nodes must exchange information quickly.
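As a generic illustration of the client-puzzle idea (not the specific puzzle construction proposed in the paper), the sketch below uses a hash-based puzzle: the server issues a random nonce and a difficulty, the client must find a value whose hash together with the nonce has the required number of leading zero bits, and the server verifies the answer with a single hash.

```python
# Generic hash-based client puzzle, illustrating the concept only (the paper
# proposes its own puzzle construction). The server hands out (nonce, k); the
# client searches for a solution x such that SHA-256(nonce || x) starts with
# k zero bits. Verification is a single hash, so the cost is asymmetric.
import hashlib
import os

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def issue_puzzle(difficulty_bits: int = 20):
    return os.urandom(16), difficulty_bits          # server side

def solve_puzzle(nonce: bytes, difficulty_bits: int) -> int:
    x = 0                                           # client side: brute force
    while True:
        digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return x
        x += 1

def verify(nonce: bytes, difficulty_bits: int, x: int) -> bool:
    digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty_bits

nonce, k = issue_puzzle(16)
solution = solve_puzzle(nonce, k)
assert verify(nonce, k, solution)
```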

Relevance:

30.00%

Publisher:

Abstract:

This paper presents an extension to the energy vector, well known in the Ambisonics literature, to improve its predictions of localisation at off-centre listening positions. In determining the source direction, a perceptual weight is assigned to each loudspeaker gain, taking into account the relative arrival times, levels, and directions of the loudspeaker signals. The proposed model is evaluated alongside the original energy vector and two binaural models through comparison with the results of recent perceptual studies. The extended version was found to provide results that were at least 50% more accurate than the second best predictor for two experiments involving off-centre listeners with first- and third-order Ambisonics systems.
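For reference, the original (unweighted) energy vector on which the extension builds is r_E = Σ g_i² u_i / Σ g_i², where g_i are the loudspeaker gains and u_i the unit vectors towards the loudspeakers. The sketch below computes only this basic form; the paper's perceptually weighted, off-centre version is not reproduced here.

```python
# Basic Gerzon energy vector: r_E = sum(g_i^2 * u_i) / sum(g_i^2), where g_i
# are loudspeaker gains and u_i unit vectors toward the loudspeakers. This is
# the unextended form; the paper's model additionally weights each gain by
# arrival time, level and direction at an off-centre listening position.
import numpy as np

def energy_vector(gains, azimuths_deg):
    """2D energy vector for loudspeakers at given azimuths (degrees)."""
    g2 = np.asarray(gains, dtype=float) ** 2
    az = np.radians(azimuths_deg)
    u = np.stack([np.cos(az), np.sin(az)], axis=1)   # unit direction vectors
    rE = (g2[:, None] * u).sum(axis=0) / g2.sum()
    direction = np.degrees(np.arctan2(rE[1], rE[0])) # predicted source azimuth
    magnitude = np.linalg.norm(rE)                   # spread/localisation quality
    return direction, magnitude

# Example: a source panned between loudspeakers at 0 and 60 degrees.
print(energy_vector([0.8, 0.5], [0.0, 60.0]))
```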

Relevance:

30.00%

Publisher:

Abstract:

We present DES14X3taz, a new hydrogen-poor superluminous supernova (SLSN-I) discovered by the Dark Energy Survey (DES) supernova program, with additional photometric data provided by the Survey Using DECam for Superluminous Supernovae. Spectra obtained using Optical System for Imaging and low-Intermediate-Resolution Integrated Spectroscopy on the Gran Telescopio CANARIAS show DES14X3taz is an SLSN-I at z = 0.608. Multi-color photometry reveals a double-peaked light curve: a blue and relatively bright initial peak that fades rapidly prior to the slower rise of the main light curve. Our multi-color photometry allows us, for the first time, to show that the initial peak cools from 22,000 to 8000 K over 15 rest-frame days, and is faster and brighter than any published core-collapse supernova, reaching 30% of the bolometric luminosity of the main peak. No physical 56Ni-powered model can fit this initial peak. We show that a shock-cooling model followed by a magnetar driving the second phase of the light curve can adequately explain the entire light curve of DES14X3taz. Models involving the shock-cooling of extended circumstellar material at a distance of 400  are preferred over the cooling of shock-heated surface layers of a stellar envelope. We compare DES14X3taz to the few double-peaked SLSN-I events in the literature. Although the rise times and characteristics of these initial peaks differ, there exists the tantalizing possibility that they can be explained by one physical interpretation.
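For background on the magnetar interpretation, the spin-down power commonly used to drive such light-curve models takes the form L(t) = (E_p/t_p)/(1 + t/t_p)², with E_p the initial rotational energy and t_p the spin-down timescale. The sketch below evaluates this standard input function with illustrative parameter values; it is not the fitting code or the fitted parameters for DES14X3taz.

```python
# Standard magnetar spin-down input used in SLSN light-curve modelling:
# L(t) = (E_p / t_p) / (1 + t / t_p)^2, with E_p the initial rotational
# energy and t_p the spin-down timescale. Parameter values below are
# illustrative, not the fitted values for DES14X3taz.
import numpy as np

DAY = 86400.0  # seconds

def magnetar_luminosity(t_s, e_rot_erg=2e52, t_spin_s=10 * DAY):
    """Spin-down power (erg/s) deposited at time t_s after explosion."""
    return (e_rot_erg / t_spin_s) / (1.0 + t_s / t_spin_s) ** 2

t = np.linspace(0, 100, 6) * DAY
for ti, Li in zip(t / DAY, magnetar_luminosity(t)):
    print(f"t = {ti:5.1f} d   L = {Li:.2e} erg/s")
```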

Relevance:

30.00%

Publisher:

Abstract:

As the largest contributor to renewable energy, biomass (especially lignocellulosic biomass) has significant potential to address atmospheric emission and energy shortage issues. The bio-fuels derived from lignocellulosic biomass are popularly referred to as second-generation bio-fuels. To date, several thermochemical conversion pathways for the production of second-generation bio-fuels have shown commercial promise; however, most of these remain at various pre-commercial stages. In view of their imminent commercialization, it is important to conduct a profound and comprehensive comparison of these production techniques. Accordingly, the scope of this review is to fill this essential knowledge gap by mapping the entire value chain of second-generation bio-fuels, from technical, economic, and environmental perspectives. This value chain covers i) the thermochemical technologies used to convert solid biomass feedstock into easier-to-handle intermediates, such as bio-oil, syngas, methanol, and Fischer-Tropsch fuel; and ii) the upgrading technologies used to convert intermediates into end products, including diesel, gasoline, renewable jet fuels, hydrogen, char, olefins, and oxygenated compounds. This review also provides an economic and commercial assessment of these technologies, with the aim of identifying the most adaptable technology for the production of bio-fuels, fuel additives, and bio-chemicals. A detailed mapping of the carbon footprints of the various thermochemical routes to second-generation bio-fuels is also carried out. The review concludes by identifying key challenges and future trends for second-generation petroleum substitute bio-fuels.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To examine the association between fatty acid binding protein 4 (FABP4) and pre-eclampsia risk in women with type 1 diabetes.
Research Design and Methods: Serum FABP4 was measured in 710 women from the Diabetes and Pre-eclampsia Intervention Trial (DAPIT) in early pregnancy and in the second trimester (median 14 and 26 weeks gestation, respectively).
Results: FABP4 was significantly elevated in early pregnancy (geometric mean 15.8 ng/mL [interquartile range 11.6–21.4] vs. 12.7 ng/mL [interquartile range 9.6–17]; P < 0.001) and the second trimester (18.8 ng/mL [interquartile range 13.6–25.8] vs. 14.6 ng/mL [interquartile range 10.8–19.7]; P < 0.001) in women in whom pre-eclampsia later developed. Elevated second-trimester FABP4 level was independently associated with pre-eclampsia (odds ratio 2.87 [95% CI 1.24, 6.68], P = 0.03). The addition of FABP4 to established risk factors significantly improved net reclassification improvement at both time points and integrated discrimination improvement in the second trimester.
Conclusions: Increased second-trimester FABP4 independently predicted pre-eclampsia and significantly improved reclassification and discrimination. FABP4 shows potential as a novel biomarker for pre-eclampsia prediction in women with type 1 diabetes.

Relevance:

30.00%

Publisher:

Abstract:

Residents tend to have high expectations regarding the benefits of hosting a mega-event, in particular the creation of new infrastructure, growth in GDP and employment, image enhancement and the spin-offs of attracting tourists and fostering sustainable growth of the cultural supply (Jeong and Faulkner 1996; Deccio and Baloglu 2002; Gursoy and Kendall 2006; Getz 2008; Langen and Garcia 2009; Ritchie et al. 2009; Gursoy et al. 2011; Palonen 2011). Nevertheless, they normally recognise that some costs will be incurred (Kim and Petrick 2005; Kim et al. 2006; Ritchie et al. 2009; Gursoy et al. 2011; Lee et al. 2013). So, it was not surprising that the nomination of Guimarães, a small city in the northwest of Portugal, as one of the two European Capitals of Culture in 2012 (2012 ECOC) had raised great expectations in the local community vis-à-vis its socio-economic and cultural benefits. Our research was designed to examine the Guimarães residents' perceptions of the impacts of hosting the 2012 ECOC, approached at two different times, before and after the event, to try and capture the evolution of the residents' assessment of its impacts. From the empirical literature, we know that residents' perceived impacts tend to change as time goes by (Kim et al. 2006; Ritchie et al. 2009; Gursoy et al. 2011; Lee et al. 2013). The data were gathered via two surveys applied to Guimarães residents, one in 2011, before the event, and the other afterwards, in 2013. The Guimarães residents' assessment was thought to be essential to get an accurate appraisal of the impact of the mega-event, as they were a main part of the hosting process. The 2012 ECOC impacts were mainly felt by local people who, in most cases, will go on feeling them in the short and long term. The research was thought to be socially pertinent, as the opinions collected through the surveys can help to prevent repeating mistakes when similar mega-events are organised in the future, and to increase the positive impacts derived from hosting them. When we talk about the social pertinence of the empirical results, we want to stress that the expertise acquired can be useful to any host city or country.

Relevance:

30.00%

Publisher:

Abstract:

Over the past few decades, work on infrared sensor applications has advanced considerably worldwide. A difficulty remains, however: objects are not always clear enough, or cannot always be easily distinguished, in the image obtained of the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision technologies, image processing and non-destructive testing, among others. This thesis addresses infrared image enhancement techniques in two respects: the processing of a single infrared image in the hybrid space-frequency domain, and the fusion of infrared and visible images using the non-subsampled contourlet transform (NSCT). Image fusion can be seen as a continuation of single-infrared-image enhancement, since it combines infrared and visible images into a single image to represent and enhance all the useful information and features of the source images; a single image cannot contain all the relevant or available information because of the restrictions inherent to any single imaging sensor. We review and investigate the development of infrared image enhancement techniques, then focus on single-infrared-image enhancement and propose a hybrid-domain enhancement scheme with an improved fuzzy threshold evaluation method, which yields higher image quality and improves human visual perception. The infrared and visible image fusion techniques are built on an accurate registration of the source images acquired by different sensors. The SURF-RANSAC algorithm is applied for registration throughout this research, which leads to very accurately registered images and greater benefits for the fusion processing. For the fusion of infrared and visible images, a series of advanced and efficient approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the subsequent proposed fusion approaches. A joint fusion approach involving the Adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, which leads to fusion results that are better than those obtained with general non-adaptive methods. An NSCT-based fusion approach employing compressed sensing (CS) and total variation (TV) on sparsely sampled coefficients, with accurate reconstruction of the fused coefficients, is proposed; it achieves much better fusion results through pre-enhancement of the infrared image and by reducing redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients in the fusion process, leading to better results more quickly and efficiently.
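The registration step described above can be sketched with a standard feature-matching pipeline in OpenCV. The snippet is a generic stand-in for the SURF-RANSAC step, not the thesis implementation: it uses ORB features (SURF itself is only available in the opencv-contrib build) and RANSAC homography estimation to align a visible image to the infrared one.

```python
# Generic feature-based registration sketch (stand-in for the SURF-RANSAC
# step): detect keypoints in both images, match descriptors, estimate a
# homography with RANSAC, and warp the visible image onto the infrared one.
# ORB is used here because SURF requires the opencv-contrib build.
import cv2
import numpy as np

def register(visible_gray: np.ndarray, infrared_gray: np.ndarray) -> np.ndarray:
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(visible_gray, None)
    kp2, des2 = orb.detectAndCompute(infrared_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = infrared_gray.shape[:2]
    return cv2.warpPerspective(visible_gray, H, (w, h))  # visible aligned to IR grid
```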

Relevance:

30.00%

Publisher:

Abstract:

Modern automobiles are no longer just mechanical tools. The electronics and computing services they ship with make them nothing less than computers. They are massive kinetic devices with sophisticated computing power. Most modern vehicles are built with added connectivity in mind, which may be vulnerable to outside attack. Researchers have shown that it is possible to infiltrate a vehicle's internal systems remotely and control physical entities such as the steering and brakes. It is quite possible to experience such attacks on a moving vehicle and be unable to use the controls. These massive connected computers can be life-threatening, as they are part of everyday life. The first part of this research studied the attack surfaces in the automotive cybersecurity domain. It also illustrated the attack methods and the extent of the damage they can cause. An online survey was deployed as the data collection tool to learn about consumers' usage of such vulnerable automotive services. The second part of the research examined consumers' privacy in the automotive world. It was found that almost one hundred percent of modern vehicles have the capability to send vehicle diagnostic data as well as user-generated data to their manufacturers, and almost thirty-five percent of automotive companies are already collecting these data. Internet privacy has been studied before in many related domains, but no privacy scale had been matched to automotive consumers; this created the research gap and motivation for this thesis. A study was performed to apply the well-established consumer privacy scale, IUIPC (Internet Users' Information Privacy Concerns), to the automotive consumers' privacy situation. Hypotheses were developed based on the IUIPC model for internet consumers' privacy and were examined against the findings from the data collection. Based on the key findings of the research, all the hypotheses were accepted, and hence automotive consumers' privacy is found to follow the IUIPC model under certain conditions. It is also found that a majority of automotive consumers use services and devices that are vulnerable and prone to cyber-attacks. Finally, it is established that there is a market for automotive cybersecurity services and that consumers are willing to pay certain fees to obtain them.

Relevance:

30.00%

Publisher:

Abstract:

Image processing offers unparalleled potential for traffic monitoring and control. For many years engineers have attempted to perfect the art of automatic data abstraction from sequences of video images. This paper outlines a research project undertaken at Napier University by the authors in the field of image processing for automatic traffic analysis. A software-based system implementing TRIP algorithms to count cars and measure vehicle speed has been developed by members of the Transport Engineering Research Unit (TERU) at the University. The TRIP algorithm has been ported and evaluated on an IBM PC platform with a view to hardware implementation of the pre-processing routines required for vehicle detection. Results show that a software-based traffic counting system is realisable for single-window processing. Due to the high volume of data that must be processed for full frames or multiple lanes, real-time operation is limited; specific hardware therefore needs to be designed. The paper outlines a hardware design for implementation of inter-frame and background differencing, background updating and shadow removal techniques. Preliminary results showing the processing time and counting accuracy for the routines implemented in software are presented, and a real-time hardware pre-processing architecture is described.
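To illustrate the inter-frame/background differencing and background-updating steps listed above, a minimal sketch follows. It is a generic running-average background subtractor with blob counting in a single detection window, assuming OpenCV; it is not the TRIP algorithm or the TERU hardware design, and the thresholds and window coordinates are arbitrary.

```python
# Generic sketch of background differencing with background updating for
# vehicle detection (illustrative only; not the TRIP algorithm). A running
# average models the background; frames are differenced against it,
# thresholded, and foreground blobs are counted inside a detection window.
# (Simplified: real counting would also track blobs across frames.)
import cv2
import numpy as np

ALPHA = 0.02        # background update rate
THRESH = 30         # difference threshold (grey levels)
MIN_AREA = 500      # minimum blob area in pixels to count as a vehicle

def process_stream(path, window=(200, 300, 100, 500)):  # (y0, y1, x0, x1)
    cap = cv2.VideoCapture(path)
    ok, frame = cap.read()
    if not ok:
        return 0
    background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        diff = cv2.absdiff(grey, background)             # background differencing
        cv2.accumulateWeighted(grey, background, ALPHA)  # background updating
        y0, y1, x0, x1 = window
        mask = (diff[y0:y1, x0:x1] > THRESH).astype(np.uint8) * 255
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        count += sum(1 for c in contours if cv2.contourArea(c) > MIN_AREA)
    cap.release()
    return count
```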

Relevance:

30.00%

Publisher:

Abstract:

SQL Injection Attack (SQLIA) remains a technique used by computer network intruders to pilfer an organisation's confidential data. This is done by an intruder re-crafting a web form's input and the query strings used in web requests with malicious intent, to compromise the security of the organisation's confidential data stored in the back-end database. The database is the most valuable data source, and thus intruders are unrelenting in constantly evolving new techniques to bypass the signature-based solutions currently provided in Web Application Firewalls (WAF) to mitigate SQLIA. There is therefore a need for an automated, scalable methodology for the pre-processing of SQLIA features fit for a supervised learning model. However, obtaining a ready-made, scalable dataset whose items are feature-engineered into numerical attributes for training Artificial Neural Network (ANN) and Machine Learning (ML) models is a known issue in applying artificial intelligence to effectively address ever-evolving novel SQLIA signatures. The proposed approach applies a numerical-attribute encoding ontology to encode features (both legitimate web requests and SQLIA) into numerical data items, so as to extract a scalable dataset for input to a supervised learning model, moving towards an ML SQLIA detection and prevention model. In the numerical-attribute encoding of features, the proposed model explores a hybrid of static and dynamic pattern matching by implementing a Non-Deterministic Finite Automaton (NFA). This is combined with a proxy and a SQL parser Application Programming Interface (API) to intercept and parse web requests in transit to the back-end database. In developing a solution to address SQLIA, this model allows web requests processed at the proxy and deemed to contain an injected query string to be prevented from reaching the target back-end database. This paper evaluates the performance metrics of a dataset obtained by the numerical encoding of features ontology in Microsoft Azure Machine Learning (MAML) Studio, using the Two-Class Support Vector Machine (TCSVM) binary classifier. This methodology then forms the subject of the empirical evaluation.
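As a rough sketch of the encode-then-classify pipeline described above, the snippet below maps raw query strings to a handful of numerical attributes and trains a two-class SVM. scikit-learn stands in for the MAML TCSVM, and the feature set, sample requests and labels are invented for illustration; they are not the ontology or dataset used in the paper.

```python
# Rough sketch of the pipeline: encode web-request strings into numerical
# attributes, then train a two-class SVM. scikit-learn stands in for the
# Azure ML TCSVM; the feature set and sample requests are illustrative only.
import re
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

SQL_TOKENS = re.compile(r"\b(union|select|or|and|drop|insert|sleep)\b", re.I)

def encode(request: str):
    """Map a raw query string to a small vector of numerical attributes."""
    return [
        len(request),                                   # request length
        request.count("'") + request.count('"'),        # quote characters
        len(SQL_TOKENS.findall(request)),               # SQL keyword count
        request.count("--") + request.count("#"),       # comment markers
        request.count("="),                             # equality operators
    ]

# Tiny illustrative dataset: (query string, label) with 1 = SQLIA.
samples = [
    ("id=42&name=alice", 0),
    ("search=red+shoes", 0),
    ("id=42' OR '1'='1' --", 1),
    ("q=1 UNION SELECT password FROM users", 1),
    ("page=3&sort=asc", 0),
    ("user=admin'; DROP TABLE logs; --", 1),
] * 20  # repeat so the split has enough rows for the demo

X = [encode(s) for s, _ in samples]
y = [label for _, label in samples]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```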