954 results for scintillation detectors
Abstract:
Malicious software (malware) has increased significantly in number and effectiveness over the past years. Until 2006, such software was mostly used to disrupt network infrastructures or to show off coders' skills. Nowadays, malware constitutes a very important source of economic profit and is very difficult to detect. Thousands of novel variants are released every day, and modern obfuscation techniques ensure that signature-based anti-malware systems are not able to detect such threats. This tendency has also appeared on mobile devices, with Android being the most targeted platform. To counteract this phenomenon, the scientific community has developed many approaches that attempt to increase the resilience of anti-malware systems. Most of these approaches rely on machine learning and have become very popular in commercial applications as well. However, attackers are now knowledgeable about these systems and have started preparing their countermeasures. This has led to an arms race between attackers and developers, in which novel systems are progressively built to tackle increasingly sophisticated attacks. For this reason, developers increasingly need to anticipate the attackers' moves. This means that defense systems should be built proactively, i.e., by introducing security design principles into their development. The main goal of this work is to show that such a proactive approach can be employed on a number of case studies. To do so, I adopted a global methodology that can be divided into two steps: first, understanding the vulnerabilities of current state-of-the-art systems (this anticipates the attackers' moves); then, developing novel systems that are robust to these attacks, or suggesting research guidelines with which current systems can be improved. This work presents two main case studies, concerning the detection of PDF and Android malware. The idea is to show that a proactive approach can be applied both in the X86 and in the mobile world. The contributions provided on these two case studies are manifold. With respect to PDF files, I first develop novel attacks that can empirically and optimally evade current state-of-the-art detectors. Then, I propose possible solutions with which it is possible to increase the robustness of such detectors against known and novel attacks. With respect to the Android case study, I first show how current signature-based tools and academically developed systems are weak against empirical obfuscation attacks, which can be easily employed without particular knowledge of the targeted systems. Then, I examine a possible strategy to build a machine learning detector that is robust against both empirical obfuscation and optimal attacks. Finally, I show how proactive approaches can also be employed to develop systems that are not aimed at detecting malware, such as mobile fingerprinting systems. In particular, I propose a methodology to build a powerful mobile fingerprinting system, and examine possible attacks with which users might be able to evade it, thus preserving their privacy.
To provide the aforementioned contributions, I co-developed (in cooperation with researchers at PRALab and Ruhr-Universität Bochum) various systems: a library to perform optimal attacks against machine learning systems (AdversariaLib), a framework for automatically obfuscating Android applications, a system for the robust detection of JavaScript malware inside PDF files (LuxOR), a robust machine learning system for the detection of Android malware, and a system to fingerprint mobile devices. I also contributed to the development of Android PRAGuard, a dataset containing a large number of empirical obfuscation attacks against the Android platform. Finally, I entirely developed Slayer NEO, an evolution of a previous system for the detection of PDF malware. The results attained with these tools show that it is possible to proactively build systems that predict possible evasion attacks. This suggests that a proactive approach is crucial to building systems that provide concrete security against both general and evasion attacks.
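In the adversarial machine learning literature, "optimal" evasion attacks of the kind mentioned above are commonly formulated as gradient descent on a differentiable classifier's decision score. The following minimal Python sketch illustrates that formulation under the assumption of a differentiable detector; the function names and parameters are illustrative, not AdversariaLib's actual API.

import numpy as np

def gradient_evasion(x, score, grad_score, step=0.1, max_iter=100):
    """Minimal gradient-descent evasion sketch: perturb feature vector x
    until the detector's score drops below the decision threshold (here 0).
    `score` and `grad_score` are assumed callables for a differentiable
    detector, e.g. a linear SVM or a logistic model."""
    x_adv = x.astype(float).copy()
    for _ in range(max_iter):
        if score(x_adv) < 0:                  # crossed into the benign region
            break
        x_adv -= step * grad_score(x_adv)     # descend the malicious score
        x_adv = np.clip(x_adv, 0.0, None)     # e.g. keyword counts stay non-negative
    return x_adv

# Toy linear detector f(x) = w.x + b, whose gradient is simply w.
w, b = np.array([0.8, -0.2, 1.5]), -0.5
x_adv = gradient_evasion(np.array([2.0, 1.0, 3.0]),
                         lambda x: w @ x + b, lambda x: w)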
Abstract:
Jones, R. A.; Breen, A. R.; Fallows, R. A.; Canals, A.; Bisi, M. M.; Lawrence, G. (2007). Interaction between coronal mass ejections and the solar wind, Journal of Geophysical Research, 112, Issue A8 RAE2008
Abstract:
Langstaff, David; Chase, T., (2007) 'A multichannel detector array with 768 pixels developed for electron spectroscopy', Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 573(1-2) pp.169-171 RAE2008
Abstract:
Breen, Andrew; Bisi, M.M.; Fallows, R.A.; Habbal, S.R., (2007) 'Large-scale structure of the fast solar wind', Journal of Geophysical Research 112(A6) pp.A06101 RAE2008
Abstract:
A framework for the simultaneous localization and recognition of dynamic hand gestures is proposed. At the core of this framework is a dynamic space-time warping (DSTW) algorithm that aligns a pair of query and model gestures in both space and time. For every frame of the query sequence, feature detectors generate multiple hand region candidates. Dynamic programming is then used to compute both a global matching cost, which is used to recognize the query gesture, and a warping path, which aligns the query and model sequences in time and also finds the best hand candidate region in every query frame. The proposed framework includes translation-invariant recognition of gestures, a desirable property for many HCI systems. The performance of the approach is evaluated on a dataset of hand-signed digits gestured by people wearing short-sleeve shirts, in front of a background containing other non-hand skin-colored objects. The algorithm simultaneously localizes the gesturing hand and recognizes the hand-signed digit. Although DSTW is illustrated in a gesture recognition setting, the proposed algorithm is a general method for matching time series that allows multiple candidate feature vectors to be extracted at each time step.
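As a rough illustration of the dynamic programming at the heart of DSTW, the sketch below aligns a model sequence with a query whose frames each carry several candidate feature vectors. It is a minimal reconstruction from the description above, not the authors' implementation, and the transition rules are assumptions.

import numpy as np

def dstw(model, query_candidates):
    """Minimal dynamic space-time warping sketch. `model` is a list of M
    feature vectors; query_candidates[t] is the list of candidate feature
    vectors (e.g. hand regions) detected in query frame t. D[t, k, i] is
    the cheapest warp matching query frames 0..t, ending in candidate k,
    against model frames 0..i; time steps may switch candidates, so the
    warp also selects the best hand region per frame."""
    T, M = len(query_candidates), len(model)
    K = max(len(c) for c in query_candidates)
    D = np.full((T, K, M), np.inf)
    for t in range(T):
        for k, q in enumerate(query_candidates[t]):
            for i in range(M):
                local = np.linalg.norm(q - model[i])
                if t == 0 and i == 0:
                    D[t, k, i] = local
                    continue
                prev = np.inf
                if t > 0:
                    prev = min(prev, D[t - 1, :, i].min())       # repeat model frame
                    if i > 0:
                        prev = min(prev, D[t - 1, :, i - 1].min())  # advance both
                if i > 0:
                    prev = min(prev, D[t, k, i - 1])             # repeat query frame
                D[t, k, i] = local + prev
    return D[T - 1, :, M - 1].min()   # global matching cost for recognition

# Toy usage: one model, a query with ambiguous candidates in frames 0 and 2.
cost = dstw([np.array([0.0]), np.array([1.0]), np.array([2.0])],
            [[np.array([0.1]), np.array([5.0])],
             [np.array([1.2])],
             [np.array([1.9]), np.array([0.0])]])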
Abstract:
Object detection is challenging when the object class exhibits large within-class variations. In this work, we show that foreground-background classification (detection) and within-class classification of the foreground class (pose estimation) can be jointly learned in a multiplicative form of two kernel functions. One kernel measures similarity for foreground-background classification. The other kernel accounts for latent factors that control within-class variation and implicitly enables feature sharing among foreground training samples. Detector training can be accomplished via standard SVM learning. The resulting detectors are tuned to specific variations in the foreground class. They also serve to evaluate hypotheses of the foreground state. When the foreground parameters are provided in training, the detectors can also produce parameter estimates. When the foreground object masks are provided in training, the detectors can also produce object segmentations. The advantages of our method over past methods are demonstrated on data sets of human hands and vehicles.
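A minimal sketch of the multiplicative-kernel idea, assuming RBF kernels for both factors and scikit-learn for the standard SVM training step; the split of the input into appearance and pose dimensions, and all widths, are illustrative assumptions.

import numpy as np
from sklearn.svm import SVC

def product_kernel(A, B, d_feat, gamma_fg=0.5, gamma_pose=2.0):
    """K((x, a), (x', a')) = K_fg(x, x') * K_pose(a, a'): one RBF factor on
    the appearance features, one on the latent/pose parameters."""
    def rbf(U, V, g):
        d2 = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
        return np.exp(-g * d2)
    K_fg = rbf(A[:, :d_feat], B[:, :d_feat], gamma_fg)       # detection factor
    K_pose = rbf(A[:, d_feat:], B[:, d_feat:], gamma_pose)   # within-class factor
    return K_fg * K_pose

# Standard SVM training with the precomputed product kernel (toy data:
# 4 appearance dimensions plus 1 pose dimension per sample).
X = np.random.randn(20, 5)
y = np.where(np.arange(20) < 10, 1, -1)
clf = SVC(kernel='precomputed').fit(product_kernel(X, X, d_feat=4), y)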
Abstract:
A common design of an object recognition system has two steps: a detection step followed by a foreground within-class classification step. For example, consider face detection by a boosted cascade of detectors followed by face ID recognition via one-vs-all (OVA) classifiers. Another example is human detection followed by pose recognition. Although the detection step can be quite fast, the foreground within-class classification process can be slow and becomes a bottleneck. In this work, we formulate a filter-and-refine scheme, where the binary outputs of the weak classifiers in a boosted detector are used to identify a small number of candidate foreground state hypotheses quickly via Hamming distance or weighted Hamming distance. The approach is evaluated in three applications: face recognition on the FRGC V2 data set, hand shape detection and parameter estimation on a hand data set, and vehicle detection and view angle estimation on a multi-view vehicle data set. On all data sets, our approach has comparable accuracy and is at least five times faster than the brute force approach.
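The filter step can be pictured as follows: a hedged sketch with assumed stump-like weak classifiers, in which the detector's binary outputs index the stored hypotheses and a weighted Hamming distance produces the shortlist for the slower refine step.

import numpy as np

def weak_bits(x, thresholds):
    """Binary outputs of stump-like weak classifiers: one bit per threshold."""
    return (x[:len(thresholds)] > thresholds).astype(np.uint8)

def shortlist(query_bits, db_bits, weights, top=5):
    """Rank stored hypotheses by weighted Hamming distance; keep the best
    few candidates to pass on to the accurate (slow) classifier."""
    dist = ((query_bits != db_bits) * weights).sum(axis=1)
    return np.argsort(dist)[:top]

# Toy usage: 1000 stored hypotheses indexed by 16 weak-classifier bits.
thresholds, weights = np.zeros(16), np.ones(16)
db = np.random.randint(0, 2, size=(1000, 16)).astype(np.uint8)
cands = shortlist(weak_bits(np.random.randn(16), thresholds), db, weights)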
Abstract:
The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system that is based upon oriented filters, such as the role of cliff filters with and without normalization, the double peak problem of maximum orientation across size scale, and the different self-similar interpolation properties across orientation than across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models where serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
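A minimal sketch of a multiscale array of oriented detectors, assuming Gabor kernels as the oriented filters; it stops at the per-pixel argmax over orientation and size, omitting the competitive and interpolative interactions that define the full Where filter.

import numpy as np
from scipy.signal import fftconvolve

def gabor(size, theta, wavelength, sigma):
    """A single oriented detector: real Gabor kernel at one orientation/scale."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

# Responses indexed by (scale, orientation); per pixel, the argmax gives a
# crude joint estimate of local orientation and size, the quantities the
# Where stream's spatial map encodes.
image = np.random.rand(64, 64)
thetas = np.linspace(0, np.pi, 4, endpoint=False)
scales = [(7, 4.0, 2.0), (15, 8.0, 4.0)]          # (kernel size, wavelength, sigma)
resp = np.stack([[np.abs(fftconvolve(image, gabor(s, t, w, sg), mode='same'))
                  for t in thetas] for (s, w, sg) in scales])
flat = resp.reshape(len(scales) * len(thetas), -1)
best_scale, best_orient = np.unravel_index(flat.argmax(0),
                                           (len(scales), len(thetas)))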
Abstract:
A neural model of peripheral auditory processing is described and used to separate features of coarticulated vowels and consonants. After preprocessing of speech via a filterbank, the model splits into two parallel channels, a sustained channel and a transient channel. The sustained channel is sensitive to relatively stable parts of the speech waveform, notably synchronous properties of the vocalic portion of the stimulus. It extends the dynamic range of eighth nerve filters using coincidence detectors that combine operations of raising to a power, rectification, delay, multiplication, time averaging, and preemphasis. The transient channel is sensitive to critical features at the onsets and offsets of speech segments. It is built up from fast excitatory neurons that are modulated by slow inhibitory interneurons. These units are combined over high-frequency and low-frequency ranges using operations of rectification, normalization, multiplicative gating, and opponent processing. Detectors sensitive to frication and to the onset or offset of stop consonants and vowels are described. Model properties are characterized by mathematical analysis and computer simulations. Neural analogs of model cells in the cochlear nucleus and inferior colliculus are noted, as are psychophysical data about the perception of CV syllables that may be explained by the sustained-transient channel hypothesis. The proposed sustained and transient processing seems to be an auditory analog of the sustained and transient processing that is known to occur in vision.
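The sustained channel's chain of operations can be sketched roughly as follows; all parameter values are assumptions, and the code is a caricature of the listed operations rather than the model's published equations.

import numpy as np

def sustained_channel(band, fs, delay_s=0.001, power=2.0, win_s=0.01):
    """Illustrative coincidence-detection chain for one filterbank band:
    preemphasis, rectification and a power law, multiplication by a
    delayed copy of the signal (coincidence), then time averaging."""
    pre = np.append(band[0], band[1:] - 0.97 * band[:-1])  # preemphasis
    r = np.maximum(pre, 0.0) ** power                      # rectify + power law
    d = int(delay_s * fs)
    coinc = r[d:] * r[:-d]                                 # multiply with delayed copy
    win = np.ones(int(win_s * fs)) / int(win_s * fs)
    return np.convolve(coinc, win, mode='same')            # time averaging

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
out = sustained_channel(np.sin(2 * np.pi * 440 * t), fs)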
Abstract:
A neural network model of synchronized oscillator activity in visual cortex is presented in order to account for recent neurophysiological findings that such synchronization may reflect global properties of the stimulus. In these recent experiments, it was reported that synchronization of oscillatory firing responses to moving bar stimuli occurred not only for nearby neurons, but also between neurons separated by several cortical columns (several mm of cortex) when these neurons shared some receptive field preferences specific to the stimuli. These results were obtained not only for single bar stimuli but also across two disconnected, but collinear, bars moving in the same direction. Our model and computer simulations reproduce these synchrony results for both single and double bar stimuli. For the double bar case, synchronous oscillations are induced in the region between the bars, but no oscillations are induced in the regions beyond the stimuli. These results were achieved with cellular units that exhibit limit cycle oscillations for a robust range of input values, but which approach an equilibrium state when undriven. Single and double bar synchronization of these oscillators was achieved by different, but formally related, models of preattentive visual boundary segmentation and attentive visual object recognition, as well as by nearest-neighbor and randomly coupled models. In preattentive visual segmentation, synchronous oscillations may reflect the binding of local feature detectors into a globally coherent grouping. In object recognition, synchronous oscillations may occur during an attentive resonant state that triggers new learning. These modelling results support earlier theoretical predictions of synchronous visual cortical oscillations and demonstrate the robustness of the mechanisms capable of generating synchrony.
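A toy sketch of the key ingredient such simulations rely on: units that limit-cycle when driven but relax to equilibrium when undriven, coupled to their nearest neighbors. The equations and parameters below are assumptions chosen to reproduce that qualitative behaviour, not the paper's model.

import numpy as np

def simulate(I, steps=4000, dt=0.05, c=0.5):
    """1-D chain of excitatory/inhibitory pairs. With the thresholded tanh
    nonlinearity, the undriven rest state is stable, while input I pushes
    a unit onto a limit cycle; nearest-neighbor excitatory coupling lets
    units driven by the same (or collinear) bars synchronize."""
    n = len(I)
    x, y = np.zeros(n), np.zeros(n)      # excitatory / slow inhibitory activities
    trace = np.empty((steps, n))
    for s in range(steps):
        nbr = np.roll(x, 1) + np.roll(x, -1)               # nearest-neighbor input
        dx = -x + np.tanh(3.0 * x - 4.0 * y + I + c * nbr - 2.0)
        dy = 0.2 * (x - y)                                  # slow inhibition
        x, y = x + dt * dx, y + dt * dy
        trace[s] = x
    return trace

I = np.zeros(20)
I[4:8] = 2.0; I[12:16] = 2.0          # two "bars" separated by a gap
trace = simulate(I)                    # inspect phase alignment across the bars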
Abstract:
The use of optical sensor technology for the non-invasive determination of key pack quality parameters improves package and product quality. This technology can be used for the optimization of packaging processes, improvement of product shelf-life, and maintenance of quality. In recent years, there has been a major focus on O2 and CO2 sensor development, as these are key gases used in modified atmosphere packaging (MAP) of food. The first and second experimental chapters (chapters 2 and 3) describe the development of O2, pH and CO2 solid-state sensors and their (potential) use for food packaging applications. A dual-analyte sensor for dissolved O2 and pH with one bi-functional reporter dye (meso-substituted Pd- or Pt-porphyrin) embedded in a plasticized PVC membrane was developed in chapter 2. The CO2 sensor developed in chapter 3 comprised a phosphorescent reporter dye Pt(II)-tetrakis(pentafluorophenyl)porphyrin (PtTFPP) and a colourimetric pH indicator α-naphtholphthalein (NP) incorporated in a plastic matrix together with a phase transfer agent, tetraoctyl- or cetyltrimethylammonium hydroxide (TOA-OH or CTA-OH). The third experimental chapter, chapter 4, describes the development of liquid O2 sensors for rapid microbiological determination, which is important for the improvement and assurance of food safety systems. This automated screening assay produced characteristic profiles with a sharp increase in fluorescence above the baseline level at a certain threshold time (TT), which can be correlated with the initial microbial load, and was applied to various raw fish and horticultural samples. Chapter 5, the fourth experimental chapter, reports the successful application of the developed O2 and CO2 sensors for quality assessment of MAP mushrooms during storage for 7 days at 4°C.
Abstract:
Silicon (Si) is the base material for electronic technologies and is emerging as a very attractive platform for photonic integrated circuits (PICs). PICs allow optical systems to be made more compact, with higher performance than discrete optical components. Applications for PICs are in the areas of fibre-optic communication, biomedical devices, photovoltaics and imaging. Germanium (Ge), due to its suitable bandgap for telecommunications and its compatibility with Si technology, is preferred over III-V compounds as an integrated on-chip detector at near-infrared wavelengths. There are two main approaches for Ge/Si integration: epitaxial growth and direct wafer bonding. The lattice mismatch of ~4.2% between Ge and Si is the main problem of the former technique, as it leads to a high density of dislocations, while the bond strength and conductivity of the interface are the main challenges of the latter. Both result in trap states, which are expected to play a critical role. Understanding the physics of the interface is a key contribution of this thesis. This thesis investigates Ge/Si diodes made using these two methods. The effects of interface traps on the static and dynamic performance of Ge/Si avalanche photodetectors have been modelled for the first time. The thesis outlines the original process development and characterization of mesa diodes which were fabricated by transferring a ~700 nm thick layer of p-type Ge onto n-type Si using direct wafer bonding and layer exfoliation. The effects of low temperature annealing on the device performance and on the conductivity of the interface have been investigated. It is shown that the diode ideality factor and the series resistance of the device are reduced after annealing. The carrier transport mechanism is shown to be dominated by generation–recombination before annealing and by direct tunnelling in forward bias and band-to-band tunnelling in reverse bias after annealing. The thesis presents a novel technique to realise photodetectors in which one of the substrates is thinned by chemical mechanical polishing (CMP) after bonding the Si-Ge wafers. Based on this technique, Ge/Si detectors with remarkably high responsivities, in excess of 3.5 A/W at 1.55 μm at −2 V, under surface normal illumination have been measured. By performing electrical and optical measurements at various temperatures, the carrier transport through the hetero-interface is analysed by monitoring the Ge band bending, from which a detailed band structure of the Ge/Si interface is proposed for the first time. The above-unity responsivity of the detectors is explained by light-induced potential barrier lowering at the interface. To our knowledge this is the first report of light-gated responsivity for vertically illuminated Ge/Si photodiodes. The wafer bonding approach followed by layer exfoliation or by CMP is a low temperature, wafer scale process. In principle, the technique could be extended to other materials, such as Ge on GaAs or Ge on SOI. The unique results reported here are compatible with surface normal illumination and are capable of being integrated with CMOS electronics and readout units in the form of 2D arrays of detectors. One potential future application is a low-cost, Si process-compatible near-infrared camera.
Abstract:
The analysis of energy detector systems is a well-studied topic in the literature: numerous models have been derived describing the behaviour of single and multiple antenna architectures operating in a variety of radio environments. However, in many cases of interest, these models are not in a closed form and so their evaluation requires the use of numerical methods. In general, these are computationally expensive, which can cause difficulties in certain scenarios, such as in the optimisation of device parameters on low-cost hardware. The problem becomes acute in situations where the signal-to-noise ratio is small and reliable detection is to be ensured, or where the number of samples of the received signal is large. Furthermore, due to the analytic complexity of the models, further insight into the behaviour of various system parameters of interest is not readily apparent. In this thesis, an approximation based approach is taken towards the analysis of such systems. By focusing on the situations where exact analyses become complicated, and making a small number of astute simplifications to the underlying mathematical models, it is possible to derive novel, accurate and compact descriptions of system behaviour. Approximations are derived for the analysis of energy detectors with single and multiple antennae operating on additive white Gaussian noise (AWGN) and independent and identically distributed Rayleigh, Nakagami-m and Rice channels; in the multiple antenna case, approximations are derived for systems with maximal ratio combiner (MRC), equal gain combiner (EGC) and square law combiner (SLC) diversity. In each case, error bounds are derived describing the maximum error resulting from the use of the approximations. In addition, it is demonstrated that the derived approximations require fewer computations of simple functions than any of the exact models available in the literature. Consequently, the regions of applicability of the approximations directly complement the regions of applicability of the available exact models. Further novel approximations for other system parameters of interest, such as sample complexity, minimum detectable signal-to-noise ratio and diversity gain, are also derived. In the course of the analysis, a novel theorem describing the convergence of the chi-square, noncentral chi-square and gamma distributions towards the normal distribution is derived. The theorem describes a tight upper bound on the error resulting from the application of the central limit theorem to random variables of the aforementioned distributions and gives a much better description of the resulting error than existing Berry-Esseen type bounds. A second novel theorem, providing an upper bound on the maximum error resulting from the use of the central limit theorem to approximate the noncentral chi-square distribution where the noncentrality parameter is a multiple of the number of degrees of freedom, is also derived.
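As a concrete instance of the approximation strategy, standard in the energy detection literature (the symbols below are assumptions, not the thesis's notation): for N complex noise samples with per-sample variance \sigma^2, the test statistic is a scaled chi-square variate with 2N degrees of freedom, and the central limit theorem gives a closed-form false alarm probability,

T = \sum_{n=1}^{N} \lvert y_n \rvert^{2}, \qquad
P_{\mathrm{fa}} = \Pr\{T > \lambda \mid H_0\}
\;\approx\; Q\!\left( \frac{\lambda - N\sigma^{2}}{\sigma^{2}\sqrt{N}} \right),

where Q is the Gaussian tail function. The exact expression requires incomplete-gamma evaluations that become slow and numerically delicate for large N, whereas the Gaussian form costs a single Q-function call; the convergence theorems summarised above bound the error of exactly this kind of replacement.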
Abstract:
PURPOSE: The purpose of this work is to improve the noise power spectrum (NPS), and thus the detective quantum efficiency (DQE), of computed radiography (CR) images by correcting for spatial gain variations specific to individual imaging plates. CR devices have not traditionally employed gain-map corrections, unlike the case with flat-panel detectors, because of the multiplicity of plates used with each reader. The lack of gain-map correction has limited the DQE(f) at higher exposures with CR. This current work describes a feasible solution to generating plate-specific gain maps. METHODS: Ten high-exposure open field images were taken with an RQA5 spectrum, using a sixth generation CR plate suspended in air without a cassette. Image values were converted to exposure, the plates registered using fiducial dots on the plate, the ten images averaged, and then high-pass filtered to remove low frequency contributions from field inhomogeneity. A gain map was then produced by converting all pixel values in the average into fractions with a mean of one. The resultant gain map of the plate was used to normalize subsequent single images to correct for spatial gain fluctuation. To validate performance, the normalized NPS (NNPS) for all images was calculated both with and without the gain-map correction. Variations in the quality of correction due to exposure levels, beam voltage/spectrum, CR reader used, and registration were investigated. RESULTS: The NNPS with plate-specific gain-map correction showed improvement over the noncorrected case over the range of frequencies from 0.15 to 2.5 mm⁻¹. At high exposure (40 mR), NNPS was 50%-90% better with gain-map correction than without. A small further improvement in NNPS was seen from carefully registering the gain map with subsequent images using small fiducial dots, because of slight misregistration during scanning. Further improvement was seen in the NNPS from scaling the gain map about the mean to account for different beam spectra. CONCLUSIONS: This study demonstrates that a simple gain map can be used to correct for the fixed-pattern noise in a given plate and thus improve the DQE of CR imaging. Such a method could easily be implemented by manufacturers because each plate has a unique bar code, and the gain maps for all plates associated with a reader could be stored for future retrieval. These experiments indicated that an improvement in NPS (and hence, DQE) is possible, depending on exposure level, over a wide range of frequencies with this technique.
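The gain-map recipe described in METHODS can be sketched in a few lines; the smoothing scale and image sizes below are assumptions, and a Gaussian low-pass is used to implement the high-pass step.

import numpy as np
from scipy.ndimage import gaussian_filter

def build_gain_map(flat_exposures, lowpass_sigma=50):
    """Average registered open-field exposure images, remove low-frequency
    field inhomogeneity (high-pass via subtraction of a Gaussian-smoothed
    copy), then normalise to a mean of one."""
    avg = np.mean(flat_exposures, axis=0)
    highpassed = avg - gaussian_filter(avg, lowpass_sigma) + avg.mean()
    return highpassed / highpassed.mean()     # gain fractions, mean = 1

def correct(image, gain_map):
    """Normalise a subsequent exposure image by the plate's gain map."""
    return image / gain_map

# Toy usage with synthetic open-field images.
flats = np.random.normal(1000, 30, size=(10, 256, 256))
gain = build_gain_map(flats)
fixed = correct(np.random.normal(1000, 30, size=(256, 256)), gain)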
Abstract:
Droplet-based digital microfluidics technology has now come of age, and software-controlled biochips for healthcare applications are starting to emerge. However, today's digital microfluidic biochips suffer from the drawback that there is no feedback to the control software from the underlying hardware platform. Due to the lack of precision inherent in biochemical experiments, errors are likely during droplet manipulation; error recovery based on the repetition of experiments leads to wastage of expensive reagents and hard-to-prepare samples. By exploiting recent advances in the integration of optical detectors (sensors) into a digital microfluidics biochip, we present a physical-aware system reconfiguration technique that uses sensor data at intermediate checkpoints to dynamically reconfigure the biochip. A cyberphysical resynthesis technique is used to recompute electrode-actuation sequences, thereby deriving new schedules, module placement, and droplet routing pathways, with minimum impact on the time-to-response.
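The checkpoint idea can be caricatured as a control loop; every name below is an assumption for illustration, not the paper's API.

class Step:
    """Toy bioassay operation with an expected sensor reading."""
    def __init__(self, name, expected):
        self.name, self.expected = name, expected
    def execute(self):
        pass  # actuate the electrode sequence for this operation (stub)

def run_with_checkpoints(steps, read_sensor, resynthesize, tol=0.1):
    """Execute steps; at each checkpoint compare the optical sensor reading
    with the expected value and, on error, re-synthesise only the remaining
    operations (new schedule, placement, routing) instead of repeating the
    whole experiment."""
    i = 0
    while i < len(steps):
        steps[i].execute()
        measured = read_sensor(steps[i])
        if abs(measured - steps[i].expected) > tol * steps[i].expected:
            steps = steps[:i + 1] + resynthesize(steps[i + 1:], measured)
        i += 1
    return steps

# Toy usage: a perfect sensor and an identity re-synthesis.
plan = [Step("dilute", 1.0), Step("mix", 2.0)]
run_with_checkpoints(plan, lambda s: s.expected, lambda rest, m: rest)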