961 results for digital devices
Abstract:
Cybercrime and related malicious activity in our increasingly digital world has become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with various challenges. Some of the most prominent of these challenges are the increasing amounts of data and the diversity of digital evidence sources appearing in digital investigations. Mobile devices and cloud infrastructures are a particularly interesting case, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. Additionally, they exhibit further characteristics such as large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities and the presence of sensitive data. This combined set of circumstances makes digital investigations in mobile and cloud environments particularly challenging. This is not aided by the fact that digital forensics today still involves manual, time-consuming tasks within the processes of identifying evidence, performing evidence acquisition and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, industry-standard tools are largely evidence-oriented, have limited support for evidence integration and only automate certain precursory tasks, such as indexing and text searching. In this study, efficiency, in the form of reduced time and human effort, is sought in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end, requirements are outlined and an architecture is designed for an automated system that performs digital forensics in highly networked mobile and cloud environments. Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning, is developed and tested. Finally, the proposed architecture is reviewed and enhancements are proposed to further automate it by introducing decentralization, particularly within the storage and processing functionality. This decentralization also improves machine-to-machine communication, supporting several digital investigation processes enabled by the architecture by harnessing the properties of various peer-to-peer overlays. Remote evidence acquisition helps to improve the efficiency (time and effort involved) of digital investigations by removing the need for proximity to the evidence. Experiments show that a single-TCP-connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition and that a multi-TCP-connection paradigm is required. The automated integration, correlation and reasoning on multiple diverse evidence sources demonstrated in the experiments improve speed and reduce the human effort needed in the analysis phase by removing the need for time-consuming manual correlation. Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication, thereby enabling automation and reducing the need for manual human intervention.
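As a hedged illustration of the multi-TCP-connection acquisition paradigm that the abstract finds necessary, the Python sketch below splits an evidence image into fixed-size ranges and pulls each range over its own TCP connection. The host, port, and the simple "OFFSET LENGTH" request format are assumptions made for illustration; the LEIA work does not publish this wire protocol here.

```python
# Hedged sketch: parallel (multi-TCP-connection) remote evidence acquisition.
# HOST, PORT and the "OFFSET LENGTH\n" request format are hypothetical.
import socket
from concurrent.futures import ThreadPoolExecutor

HOST, PORT = "device.example.local", 9000    # hypothetical acquisition agent
IMAGE_SIZE = 64 * 1024 * 1024                # bytes to acquire (example value)
CHUNK = 4 * 1024 * 1024                      # bytes requested per connection
CONNECTIONS = 8                              # parallel TCP streams

def fetch_chunk(offset: int, length: int) -> bytes:
    """Open a dedicated TCP connection and read one chunk of the evidence image."""
    with socket.create_connection((HOST, PORT), timeout=30) as sock:
        sock.sendall(f"{offset} {length}\n".encode())
        buf = bytearray()
        while len(buf) < length:
            data = sock.recv(min(65536, length - len(buf)))
            if not data:
                break
            buf.extend(data)
        return bytes(buf)

offsets = range(0, IMAGE_SIZE, CHUNK)
with ThreadPoolExecutor(max_workers=CONNECTIONS) as pool:
    chunks = pool.map(lambda off: fetch_chunk(off, min(CHUNK, IMAGE_SIZE - off)), offsets)

image = b"".join(chunks)   # reassembled in order; map() preserves input order
```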
Abstract:
Communications devices for government or military applications must keep data secure, even when their electronic components fail. Combining information flow and risk analyses could make fault-mode evaluations for such devices more efficient and cost-effective.
Abstract:
Today, portable devices have become the driving force of the consumer market, and new challenges are emerging to increase their performance while maintaining a reasonable battery lifetime. The digital domain is the best solution for implementing signal-processing functions, thanks to the scalability of CMOS technology, which pushes towards sub-micrometre integration. Indeed, the reduction of the supply voltage introduces severe limitations on achieving an acceptable dynamic range in the analogue domain. Lower cost, lower power consumption, higher yield and greater reconfigurability are the main advantages of signal processing in the digital domain. For more than a decade, several purely analogue functions have been moved into the digital domain. This means that analogue-to-digital converters (ADCs) are becoming the key components in many electronic systems. They are, in fact, the bridge between the analogue and digital worlds and, consequently, their efficiency and accuracy often determine the overall performance of the system. Sigma-Delta converters are the key building block used as the interface in high-resolution, low-power mixed-signal circuits. Modelling and simulation tools are effective and essential instruments in the design flow. Although transistor-level simulations give more precise and accurate results, this method is extremely time-consuming because of the oversampling nature of this type of converter. For this reason, high-level behavioural models of the modulator are essential for the designer to run fast simulations that identify the specifications the converter must meet to achieve the required performance. The aim of this thesis is the behavioural modelling of the Sigma-Delta modulator, taking into account several non-idealities such as the integrator dynamics and its thermal noise. Transistor-level simulation results and experimental data demonstrate the precision and accuracy of the proposed behavioural model.
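A minimal behavioural sketch of the kind of model the thesis describes is given below: a first-order discrete-time Sigma-Delta modulator in Python, with a leaky integrator standing in for finite DC gain and an additive term for integrator thermal noise. The leak factor, noise level and test-tone amplitude are illustrative assumptions, not the thesis values.

```python
# Hedged sketch: first-order discrete-time Sigma-Delta modulator behavioural model.
import numpy as np

osr, n = 64, 8192                               # oversampling ratio, number of samples
t = np.arange(n)
x = 0.5 * np.sin(2 * np.pi * t / (4 * osr))     # low-frequency (in-band) test tone

leak = 0.999     # < 1 models finite integrator DC gain (non-ideal dynamics), assumed
vn_rms = 1e-4    # rms integrator input-referred thermal-noise sample, assumed

integ = 0.0
y = np.empty(n)
for i in range(n):
    q = 1.0 if integ >= 0.0 else -1.0           # 1-bit quantizer / feedback DAC
    integ = leak * integ + (x[i] - q) + vn_rms * np.random.randn()
    y[i] = q

# The bitstream y carries the input tone plus first-order-shaped quantization noise;
# an FFT of y restricted to the signal band (below fs / (2 * osr)) estimates the SNDR.
```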
Abstract:
Trauma and damage to the delicate structures of the inner ear frequently occur during insertion of an electrode array into the cochlea. This is strongly related to the excessive manual insertion force applied by the surgeon without any tool/tissue interaction feedback. The research examined tool-tissue interaction between a large prototype-scale (12.5:1) digit, embedded with a distributive tactile sensor and based upon a cochlear electrode, and a large prototype-scale (4.5:1) cochlea phantom simulating the human cochlea, which could inform the requirements of a small-scale digit. The flexible digit classified tactile information from the digit-phantom interaction such as contact status, tip penetration, obstacles, relative shape and location, contact orientation and multiple contacts. The digit, with distributive tactile sensors embedded on a silicon substrate, is inserted into the cochlea phantom to measure digit/phantom interaction and the position of the digit, in order to minimize tissue damage and trauma during cochlear electrode insertion. The digit is pre-curved in the shape of the cochlea so that it better conforms to the shape of the scala tympani and lightly hugs the modiolar wall. The digit provided information on the characteristics of touch and on the digit-phantom interaction during insertion. The tests demonstrated that even devices of such relatively simple, low-cost design have the potential to improve cochlear implant surgery and other lumen-mapping applications by providing tactile feedback and by controlling the insertion through sensing and control of the tip of the implant. With this approach, the surgeon could minimize tissue damage and potential damage to the delicate structures within the cochlea caused by current manual electrode insertion in cochlear implantation. The approach can also be applied to diagnosis and path-navigation procedures. The digit is at a large-scale stage and could be miniaturized in the future to support more realistic surgical procedures.
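As a hedged illustration of the kind of tactile classification described above, the Python sketch below derives contact status, number of contacts and contact locations from a one-dimensional distributive sensor array along the digit. The array length, noise floor and readings are assumed values, not data from the experiments.

```python
# Hedged sketch: classifying contact state from a distributive tactile sensor array.
import numpy as np

def classify_contact(readings: np.ndarray, noise_floor: float = 0.02) -> dict:
    """Return contact status, contact locations along the digit, and a multi-contact count."""
    active = readings > noise_floor
    # Contiguous runs of active elements correspond to distinct contact regions.
    edges = np.flatnonzero(np.diff(np.concatenate(([0], active.view(np.int8), [0]))))
    regions = edges.reshape(-1, 2)
    return {
        "status": "contact" if active.any() else "no contact",
        "num_contacts": len(regions),
        "locations": [int(r.mean()) for r in regions],   # element index ~ insertion depth
        "peak_reading": float(readings.max()),
    }

# Example: a single contact near the tip of a 16-element digit (assumed readings)
print(classify_contact(np.array([0.0] * 12 + [0.1, 0.4, 0.3, 0.05])))
```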
Abstract:
Image collections are ever growing and hence visual information is becoming more and more important. Moreover, the classical paradigm of taking pictures has changed, first with the spread of digital cameras and, more recently, with mobile devices equipped with integrated cameras. Clearly, these image repositories need to be managed, and tools for effectively and efficiently searching image databases are highly sought after, especially on mobile devices where more and more images are being stored. In this paper, we present an image browsing system for interactive exploration of image collections on mobile devices. Images are arranged so that visually similar images are grouped together while large image repositories become accessible through a hierarchical, browsable tree structure, arranged on a hexagonal lattice. The developed system provides an intuitive and fast interface for navigating through image databases using a variety of touch gestures. © 2012 Springer-Verlag.
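A minimal sketch of one ingredient of such a browser is given below: mapping hexagonal-lattice cells to screen coordinates so that thumbnails of visually similar images can be placed in neighbouring cells. The cell size and the pointy-top layout are assumptions for illustration; the paper's exact lattice parameters are not reproduced here.

```python
# Hedged sketch: screen placement of thumbnails on a hexagonal lattice.
import math

def hex_to_pixel(col: int, row: int, size: float = 64.0) -> tuple[float, float]:
    """Centre of a pointy-top hexagon cell; odd rows are offset by half a cell width."""
    x = size * math.sqrt(3) * (col + 0.5 * (row & 1))
    y = size * 1.5 * row
    return x, y

# Cells closest to a centre cell could hold the most visually similar images.
for row in range(3):
    for col in range(3):
        print((col, row), "->", hex_to_pixel(col, row))
```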
Abstract:
All-optical signal processing is a powerful tool for the processing of communication signals, and optical network applications have been routinely considered since the inception of optical communication. There are many successful optical devices deployed in today's communication networks, including optical amplifiers, dispersion compensation, optical cross-connects and reconfigurable add-drop multiplexers. However, despite record-breaking performance, all-optical signal processing devices have struggled to find a viable market niche. This has been mainly due to competition from electro-optic alternatives, either on the basis of detailed performance analysis or, more usually, due to the limited market opportunity for a mid-link device. For example, a wavelength converter would compete with a reconfigured transponder, which has an additional market as an actual transponder and can therefore be developed significantly more economically. Nevertheless, the potential performance of all-optical devices is enticing. Motivated by their prospects of eventual deployment, in this chapter we analyse the performance and energy consumption of digital coherent transponders, linear coherent repeaters and modulator-based pulse shaping/frequency conversion, setting a benchmark for the proposed all-optical implementations.
Abstract:
The spread of computer-based work can be examined from many perspectives and with many methods. In his research, starting from the need to establish harmony between human, machine and environment, the author seeks to identify the factors that influence the efficiency with which computer-based activities are organized and carried out. This study gives an overview of the main results of the initial phase of the research, namely computer-usage habits, which is essential for uncovering the critical factors associated with office and administrative activities and for laying the groundwork for measurement and development tasks. Beyond ergonomic considerations in the narrower sense, the question of digital competences was also brought into the work, which the author considers relevant for measuring efficiency, since the choice of computer and the design of the workplace cannot be evaluated without the suitability of the human factor and the content of the task to be performed. __________ Office and administrative work, business correspondence, private contacts and learning are increasingly supported by computers. Moreover, the technical possibilities of correspondence are wider than using a PC: it is accessible on the go by a cell phone. The author analysed the characteristics of the devices used, the working environment, satisfaction factors in connection with computer work and digital competence by means of a survey. In his opinion, development in an ergonomic approach is important not only to establish technological novelties but to utilize the present possibilities of hardware and environment. The reason for this is that many people cannot (or do not want to) follow the dynamic technological development of computers by buying the newest devices. The study compares the characteristics of computer work at home and at the workplace. This research was carried out as part of the “TAMOP-4.2.1.B-10/2/ KONV-2010-0001” project with support from the European Union, co-financed by the European Social Fund.
Abstract:
Current mobile networks do not offer sufficient data rates to support the multimedia-intensive applications in development for multifunctional mobile devices. Ultra-wideband (UWB) wireless technology is being considered as the solution to overcome data-rate bottlenecks in current mobile networks. UWB is able to achieve such high data transmission rates because it transmits data over a very large chunk of the frequency spectrum: as currently approved by the U.S. Federal Communications Commission, it utilizes 7.5 GHz of spectrum between 3.1 GHz and 10.6 GHz. Successful transmission and reception of information using UWB wireless technology in mobile devices requires an antenna that has linear phase, low dispersion and a voltage standing wave ratio (VSWR) ≤ 2 throughout the entire frequency band. Compatibility with an integrated circuit requires an unobtrusive and electrically small design. Previous techniques used to optimize the performance of UWB wireless systems involve proper design of source pulses for optimal UWB performance. The goal of this work is the design of antennas for personal communication devices with optimal UWB bandwidth performance. Several techniques are proposed in this Ph.D. dissertation for optimal UWB bandwidth performance of the UWB antenna designs. The dissertation presents novel UWB antenna designs for personal communication devices that have been characterized and optimized using the finite-difference time-domain (FDTD) technique. The antenna designs reported in this research are physically compact, planar for low-profile use, with sufficient impedance bandwidth (>20%), an antenna input impedance of 50 Ω, and an omni-directional (±1.5 dB) radiation pattern in the operating bandwidth.
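The VSWR ≤ 2 criterion and the impedance-bandwidth figure quoted above can be checked from reflection-coefficient data with a few lines of Python, shown below. The sample |S11| values and frequency points are illustrative assumptions, not measured results from the dissertation.

```python
# Hedged sketch: VSWR <= 2 check and fractional bandwidth from assumed |S11| samples.
import numpy as np

freq_ghz = np.linspace(3.1, 10.6, 6)                        # FCC-approved UWB band
s11_mag = np.array([0.40, 0.20, 0.15, 0.25, 0.30, 0.22])    # assumed |S11| values

vswr = (1 + s11_mag) / (1 - s11_mag)     # VSWR from the reflection-coefficient magnitude
matched = vswr <= 2.0                    # equivalent to |S11| <= 1/3

f_low, f_high = freq_ghz[matched].min(), freq_ghz[matched].max()
fractional_bw = 2 * (f_high - f_low) / (f_high + f_low) * 100   # percent

print("VSWR:", np.round(vswr, 2))
print(f"Matched band: {f_low}-{f_high} GHz, fractional bandwidth {fractional_bw:.1f}%")
```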
Abstract:
This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel for users with visual refractive errors. The target user groups for this system are individuals who have moderate to severe visual aberrations for which conventional means of compensation, such as glasses or contact lenses, do not improve their vision. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to this user, will counteract his/her visual aberration. The method described in this dissertation advances the development of techniques for providing such compensation by integrating spatial information in the image as a means of eliminating some of the shortcomings inherent in using display devices such as monitors or LCD panels. Additionally, physiological considerations are discussed and integrated into the method for providing said compensation. In order to provide a realistic sense of the performance of the methods described, they were tested by mathematical simulation in software, as well as by using a single-lens high-resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberrations. Experiments were conducted on these systems and the data collected from these experiments were evaluated using statistical analysis. The experimental results revealed that the pre-compensation method resulted in a statistically significant improvement in vision for all of the systems. Although significant, the improvement was not as large as expected for the human-subject tests. Further analysis suggests that, even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing. This would require real-time monitoring of relevant variables (e.g. pupil diameter) and continuous adjustment in the pre-compensation process to yield maximum viewing enhancement.
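A hedged sketch of the general idea, pre-distorting an image so that a known blur undoes the distortion, is shown below using a regularised (Wiener-style) inverse filter in Python. A Gaussian kernel stands in for the eye's point-spread function; the dissertation instead derives the PSF from the measured wavefront aberration, and its actual algorithm may differ.

```python
# Hedged sketch: frequency-domain pre-compensation for a known point-spread function.
import numpy as np

def precompensate(image: np.ndarray, psf: np.ndarray, k: float = 1e-2) -> np.ndarray:
    """Regularised inverse filter: the displayed image, once blurred by the eye's PSF,
    should approximate the original image."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + k)            # Wiener-style inverse filter
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
    return np.clip(out, 0.0, 1.0)                    # displays cannot render values <0 or >1

# Illustrative Gaussian PSF (stand-in for a wavefront-derived PSF) on a 64x64 test image
y, x = np.mgrid[-32:32, -32:32]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
psf /= psf.sum()
test = np.zeros((64, 64)); test[24:40, 24:40] = 1.0
display_image = precompensate(test, psf)
```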
Abstract:
Currently the data storage industry is facing huge challenges with respect to the conventional method of recording data, known as longitudinal magnetic recording. This technology is fast approaching a fundamental physical limit, known as the superparamagnetic limit. A unique way of deferring the superparamagnetic limit is the patterning of magnetic media. This method exploits the use of lithography tools to predetermine the areal density. The various nanofabrication schemes employed to pattern the magnetic material are Focused Ion Beam (FIB), E-beam Lithography (EBL), UV-Optical Lithography (UVL), Self-assembled Media Synthesis and Nanoimprint Lithography (NIL). Although there are many challenges to manufacturing patterned media, the large potential gains offered in terms of areal density make it one of the most promising new technologies on the horizon for future hard disk drives. Thus, this dissertation contributes to the development of future alternative data storage devices and the deferral of the superparamagnetic limit by designing and characterizing patterned magnetic media using a novel nanoimprint replication process called "Step and Flash Imprint Lithography" (SFIL). As opposed to hot embossing and other high-temperature, low-pressure processes, SFIL can be performed at low pressure and room temperature. Initial experiments consisted of process-flow design for the patterned structures on sputtered Ni-Fe thin films, the main one being a defectivity analysis for the SFIL process, conducted by fabricating and testing devices of varying feature sizes (50 nm to 1 μm), inspecting them optically and testing them electrically. Once the SFIL process was optimized, a number of Ni-Fe coated wafers were imprinted with a template having the patterned topography. A minimum feature size of 40 nm was obtained with varying pitch (1:1, 1:1.5, 1:2, and 1:3). The characterization steps involved extensive SEM study at each processing step as well as Atomic Force Microscopy (AFM) and Magnetic Force Microscopy (MFM) analysis.
Abstract:
The study of transport processes in low-dimensional semiconductors requires a rigorous quantum mechanical treatment. However, a full-fledged quantum transport theory of electrons (or holes) in semiconductors of small scale, applicable in the presence of external fields of arbitrary strength, is still not available. In the literature, different approaches have been proposed, including: (a) the semiclassical Boltzmann equation, (b) perturbation theory based on Keldysh's Green functions, and (c) the Quantum Boltzmann Equation (QBE), previously derived by Van Vliet and coworkers, applicable in the realm of Kubo's Linear Response Theory (LRT). In the present work, we follow the method originally proposed by Van Vliet in LRT. The Hamiltonian in this approach is of the form H = H0(E, B) + λV, where H0 contains the externally applied fields and λV includes the many-body interactions. This Hamiltonian differs from the LRT Hamiltonian, H = H0 − AF(t) + λV, which contains the external field in the field-response part, −AF(t). For the nonlinear problem, the eigenfunctions of the system Hamiltonian, H0(E, B), include the external fields without any limitation on strength. In Part A of this dissertation, both the diagonal and nondiagonal Master equations are obtained by applying projection operators to the von Neumann equation for the density operator in the interaction picture and taking the Van Hove limit (λ → 0, t → ∞, such that (λ²t)ⁿ remains finite). Similarly, the many-body current operator J is obtained from the Heisenberg equation of motion. In Part B, the Quantum Boltzmann Equation is obtained in the occupation-number representation for an electron gas interacting with phonons or impurities. On the one-body level, the current operator obtained in Part A leads to the generalized Calecki current for electric and magnetic fields of arbitrary strength. Furthermore, in this part, the LRT results for the current and conductance are recovered in the limit of small electric fields. In Part C, we apply the above results to the study of both linear and nonlinear longitudinal magneto-conductance in quasi-one-dimensional quantum wires (1D QW). We have thus been able to quantitatively explain the experimental results recently published by C. Brick, et al. on these novel frontier-type devices.
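For reference, the two Hamiltonians and the Van Hove limit named in the abstract can be restated in LaTeX as follows (a transcription of the text above, not additional material from the dissertation):

```latex
% Hamiltonian used in this work: external fields of arbitrary strength inside H_0
\[ H = H_0(\mathbf{E}, \mathbf{B}) + \lambda V \]
% Hamiltonian of linear response theory: external field in the response part -AF(t)
\[ H = H_0 - A F(t) + \lambda V \]
% Van Hove limit taken to obtain the Master equations
\[ \lambda \to 0, \quad t \to \infty, \quad \text{with } (\lambda^2 t)^n \ \text{finite} \]
```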
Abstract:
Electronic noise has been investigated in AlxGa1−xN/GaN Modulation-Doped Field Effect Transistors (MODFETs) of submicron dimensions, grown for us by MBE (Molecular Beam Epitaxy) techniques at Virginia Commonwealth University by Dr. H. Morkoç and coworkers. Some 20 devices were grown on a GaN substrate, four of which have leads bonded to source (S), drain (D), and gate (G) pads, respectively. Conduction takes place in the quasi-2D layer of the junction (xy plane), which is perpendicular to the quantum well (z-direction) of average triangular width ∼3 nm. A non-doped intrinsic buffer layer of ∼5 nm separates the Si-doped donors in the AlxGa1−xN layer from the 2D transistor plane, which affords a very high electron mobility, thus enabling high-speed devices. Since all contacts (S, D, and G) must reach through the AlxGa1−xN layer to connect internally to the 2D plane, parallel conduction through this layer is a feature of all modulation-doped devices. While the shunting effect may account for no more than a few percent of the current IDS, it is responsible for most excess noise over and above the thermal noise of the device. The excess noise has been analyzed as a sum of Lorentzian spectra and 1/f noise. The Lorentzian noise has been ascribed to trapping of the carriers in the AlxGa1−xN layer. A detailed multitrapping generation-recombination noise theory is presented, which shows that an exponential relationship exists for the time constants obtained from the spectral components as a function of 1/kT. The trap depths have been obtained from Arrhenius plots of log(τT²) vs. 1000/T. Comparison with previous noise results for GaAs devices shows that: (a) many more trapping levels are present in these nitride-based devices; (b) the traps are deeper (farther below the conduction band) than for GaAs. Furthermore, the magnitude of the noise is strongly dependent on the level of depletion of the AlxGa1−xN donor layer, which can be altered by a negative or positive gate bias VGS. Altogether, these frontier nitride-based devices are promising for bluish-light optoelectronic devices and lasers; however, the noise, though well understood, indicates that the purity of the constituent layers should be greatly improved for future technological applications.
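As a hedged illustration of the Arrhenius analysis mentioned above, the Python sketch below extracts a trap activation energy from the slope of ln(τT²) versus 1000/T. The (T, τ) pairs are invented for illustration and are not the measured MODFET data.

```python
# Hedged sketch: trap activation energy from an Arrhenius plot of ln(tau*T^2) vs 1000/T.
import numpy as np

k_B = 8.617e-5                                             # Boltzmann constant, eV/K
T = np.array([250.0, 275.0, 300.0, 325.0, 350.0])          # temperatures in K (assumed)
tau = np.array([2.1e-3, 6.0e-4, 2.0e-4, 7.5e-5, 3.2e-5])   # time constants in s (assumed)

x = 1000.0 / T
y = np.log(tau * T**2)
slope, intercept = np.polyfit(x, y, 1)     # ln(tau*T^2) = (E_a / (1000*k_B)) * x + const

E_a = slope * 1000.0 * k_B                 # trap depth below the conduction band, eV
print(f"Trap activation energy ≈ {E_a:.2f} eV")
```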
Abstract:
Compact thermal-fluid systems are found in many industries, from aerospace to microelectronics, where a combination of small size, light weight, and a high surface-area-to-volume-ratio fluid network is necessary. These devices are typically designed with fluid networks consisting of many small parallel channels that effectively pack a large amount of heat transfer surface area into a very small volume, but do so at the cost of increased pumping power requirements. To offset this cost, the use of a branching fluid network for the distribution of coolant within a heat sink is investigated. The goal of the branch design technique is to minimize the entropy generation associated with the combination of viscous dissipation and convection heat transfer experienced by the coolant in the heat sink, while maintaining a compact, high ratio of heat transfer surface area to volume. The derivation of Murray's Law, originally developed to predict the geometry of physiological transport systems, is extended to heat sink designs which minimize entropy generation. Two heat sink designs at different scales are built and tested experimentally and analytically. The first uses this new derivation of Murray's Law. The second uses a combination of Murray's Law and Constructal Theory. The results of the experiments were used to verify the analytical and numerical models. These models were then used to compare the performance of the heat sink with other compact high-performance heat sink designs. The results showed that the techniques used to design branching fluid networks significantly improve the performance of active heat sinks. The design experience gained was then used to develop a set of geometric relations which optimize the heat-transfer-to-pumping-power ratio of a single cooling channel element. Each element can be connected to others using a set of derived geometric guidelines which govern branch diameters and angles. The methodology can be used to design branching fluid networks which can fit any geometry.
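A minimal sketch of the branch-sizing rule underlying the design, Murray's law for a symmetric bifurcation (the cube of the parent radius equals the sum of the cubes of the daughter radii), is given below in Python. The parent radius and the number of generations are assumed values.

```python
# Hedged sketch: sizing daughter branches with Murray's law, r_parent^3 = n * r_daughter^3.
def murray_daughters(parent_radius: float, n_daughters: int = 2) -> float:
    """Radius of each of n equal daughter branches at a bifurcation."""
    return parent_radius * (1.0 / n_daughters) ** (1.0 / 3.0)

r = 1.0e-3                               # 1 mm parent channel radius (assumed)
for level in range(4):                   # four branching generations (assumed)
    print(f"generation {level}: radius = {r * 1e3:.3f} mm")
    r = murray_daughters(r)              # each branch splits into two daughters
```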