7 results for Digital electronics

in DRUM (Digital Repository at the University of Maryland)


Relevance: 30.00%

Abstract:

The aim of this dissertation was to investigate flexible polymer-nanoparticle composites with unique magnetic and electrical properties. Toward this goal, two distinct projects were carried out. The first project explored the magneto-dielectric properties and morphology of flexible polymer-nanoparticle composites that possess high permeability (µ), high permittivity (ε), and minimal dielectric and magnetic loss (tan δε, tan δµ). The main materials challenges were the synthesis of magnetic nanoparticle fillers displaying high saturation magnetization (Ms) and limited coercivity, and their homogeneous dispersion in a polymeric matrix. Nanostructured magnetic fillers, including polycrystalline iron core-shell nanoparticles and constructively assembled superparamagnetic iron oxide nanoparticles, were synthesized and dispersed uniformly in an elastomer matrix to minimize conductive losses. The resulting composites demonstrated promising permittivity (22.3) and permeability (3) with sustained low dielectric (0.1) and magnetic (0.4) loss at frequencies below 2 GHz. This study demonstrated nanocomposites with a tunable magnetic resonance frequency, which can be used to develop compact, flexible, and highly efficient radio frequency devices.

The second project focused on fundamental research into methods for designing highly conductive polymer-nanoparticle composites that maintain high electrical conductivity under tensile strains exceeding 100%. We investigated a simple solution-spraying method to fabricate stretchable conductors based on elastomeric block copolymer fibers and silver nanoparticles. Silver nanoparticles were assembled both in and around the block copolymer fibers, forming interconnected dual nanoparticle networks that provide conductive pathways both within the fibers and on their outer surfaces. Stretchable composites with conductivity values reaching 9000 S/cm maintained 56% of their initial conductivity after 500 cycles at 100% strain. The manufacturing method developed in this research could pave the way toward direct deposition of flexible electronic devices on substrates of any shape. The electrical and electromechanical properties of these dual silver nanoparticle network composites make them promising materials for future stretchable circuitry in displays, solar cells, antennas, and strain and tactile sensors.
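As a rough illustration of why these values matter for compact RF devices, the sketch below computes the standard magneto-dielectric miniaturization factor n = √(µrεr) from the reported permittivity and permeability; this figure of merit is textbook antenna theory, not a calculation from the dissertation itself.

```python
import math

# Reported composite properties (from the abstract above)
eps_r = 22.3   # relative permittivity
mu_r = 3.0     # relative permeability

# The wavelength inside a magneto-dielectric substrate shrinks by
# n = sqrt(mu_r * eps_r), so an antenna on this material can be
# roughly n times smaller than one on an air substrate.
n = math.sqrt(mu_r * eps_r)
print(f"miniaturization factor: {n:.1f}x")  # ~8.2x
```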

Relevance: 30.00%

Abstract:

MOVE is a composition for string quartet, piano, percussion, and electronics, approximately 15-16 minutes in duration, in three movements. The work incorporates electronic samples either synthesized by the composer or recorded from acoustic instruments. It aims to use electronic sounds as an expansion of the tonal palette of the chamber group (rather like an extended percussion setup) rather than as a dominating sonic feature of the music. This is done by limiting the use of electronics to specific sections of the work and by prioritizing blend and sonic coherence in the synthesized samples. The work uses fixed electronics in a way that still allows for tempo variations in the music. Generally, a difficulty arises in that fixed “tape” parts don’t allow tempo variations, while truly “live” software algorithms sacrifice rhythmic accuracy. Sample pads, such as the Roland SPD-SX, provide an elegant solution: the latency of such a device is close enough to zero that individual samples can be triggered in real time at a range of tempi. The percussion setup in this work (vibraphone and sample pad) allows one player to cover both parts, eliminating the need for an additional musician to trigger the electronics. Compositionally, momentum is used as a constructive principle. The first movement makes prominent use of ostinato and shifting meter. The second is a set of variations on a repeated harmonic pattern, with a polymetric middle section. The third is a type of passacaglia in which the bassline is not introduced right away but becomes more significant later in the movement. Given the importance of visual presentation in the Internet age, the final goal of the project was to shoot HD video of a studio performance of the work for publication online. The composer recorded audio and video in two separate sessions and edited the production using Logic X and Adobe Premiere Pro. The final video can be seen at geoffsheil.com/move.

Relevance: 20.00%

Abstract:

The recent popularity of IEEE 802.11b Wireless Local Area Networks (WLANs) in a host of current applications has given rise to a suite of research challenges. 802.11b WLANs are highly reliable and widespread. In this work, we study the temporal characteristics of the received signal strength indicator (RSSI) in a real working environment by conducting a controlled set of experiments. Our results indicate that significant variability in the RSSI can occur over time. Some of this variability may be due to systematic causes, while the remaining component can be expressed as stochastic noise. We present an analysis of both aspects of the RSSI. We treat the moving average of the RSSI as the systematic component and the residual as stochastic noise, and we give a reasonable estimate for the moving average so that the noise can be computed accurately. We attribute this variability to changes in the environment, such as the movement of people, and to noise associated with the NIC circuitry and the network access point. The results of our analysis are of primary importance to active research areas such as location determination of users in a WLAN: the techniques used in some RF-based WLAN location determination systems exploit the characteristics of the RSSI presented in this work to infer the location of a wireless client. Our results thus form building blocks for other work that relies on the exact characteristics of the RSSI.
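A minimal sketch of the decomposition described above, assuming RSSI samples in a one-dimensional array; the window length and the synthetic trace are illustrative, not the estimator used in the paper.

```python
import numpy as np

def decompose_rssi(rssi, window=25):
    """Split an RSSI time series into a systematic component
    (moving average) and stochastic noise (the residual)."""
    kernel = np.ones(window) / window
    systematic = np.convolve(rssi, kernel, mode="same")  # moving average
    noise = rssi - systematic
    return systematic, noise

# Illustrative usage with a synthetic trace (dBm): a slow environmental
# drift plus Gaussian noise standing in for NIC/access-point effects.
rng = np.random.default_rng(0)
rssi = -60 + 3 * np.sin(np.linspace(0, 6, 500)) + rng.normal(0, 2, 500)
systematic, noise = decompose_rssi(rssi)
print(f"estimated noise std: {noise.std():.2f} dB")
```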

Relevance: 20.00%

Abstract:

In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network: routing is performed in the direction of this vector field at every location, and the magnitude of the vector field at every location represents the density of the data being transported through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics, and we show that in order to minimize the cost, the routes should be found from the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient).

In one application of our mathematical model, we offer a scheme for energy-efficient routing. Our routing scheme raises the permittivity coefficient in places where nodes have high residual energy and lowers it in places where nodes do not have much energy left. Our simulations show that this method gives a significant increase in network lifetime compared to the shortest-path and weighted shortest-path schemes. Our initial focus is on the case where there is only one destination in the network; we later extend our approach to the case of multiple destinations. With multiple destinations, the network must be partitioned into several areas known as the regions of attraction of the destinations, with each destination responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case is how to define the regions of attraction and how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve this problem: we define a conservative vector field, which can therefore be written as the gradient of a scalar field (also known as a potential field), and we show that in the optimal assignment of the communication load to the destinations, the value of that potential field is equal at the locations of all the destinations.

Another application of our vector field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
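In electrostatic notation, the routing formulation above can be sketched as follows (the symbols are illustrative; the dissertation's own notation may differ):

```latex
\min_{\mathbf{D}} \int_{A} \frac{\lVert \mathbf{D}(x) \rVert^{2}}{\epsilon(x)} \, dA
\qquad \text{subject to} \qquad \nabla \cdot \mathbf{D}(x) = \rho(x),
```

Here D is the data-flow density whose direction defines the routes, ρ(x) is positive at sensors (sources) and negative at destinations (sinks), and ε(x) is the permittivity coefficient, raised where residual energy is high. The optimizer is a conservative field, D = -ε∇φ, so the potential φ satisfies ∇·(ε∇φ) = -ρ, the same elliptic PDE as electrostatics in a non-homogeneous dielectric.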
In another part of this work, we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce a test of responsiveness for aggregates that we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control; a distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. We then modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing the aggregate's response. We offer two methods for conformance testing. In the first, we apply the perturbation tests to SYN packets sent at the start of the TCP 3-way handshake and use the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second, we apply the perturbation tests to TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We use the analogy between our problem and multiple-access communication to find signatures: specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of this orthogonality, performance does not degrade due to cross-interference between simultaneously testing routers. We have shown the efficacy of our methods through mathematical analysis and extensive simulation experiments.
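A minimal sketch of the orthogonal-signature idea behind CAPM, assuming Walsh-Hadamard codes as the signatures; all router counts, gains, and noise figures below are hypothetical:

```python
import numpy as np

def walsh_codes(order):
    """Build 2**order mutually orthogonal +/-1 Walsh-Hadamard codes."""
    h = np.array([[1]])
    for _ in range(order):
        h = np.block([[h, h], [h, -h]])
    return h

codes = walsh_codes(3)  # 8 signatures, one per simultaneously testing router
rng = np.random.default_rng(0)

# Each router modulates its packet-drop rate with its own signature.
# A responsive aggregate's rate change is (roughly) the sum of scaled
# copies of all applied signatures, plus measurement noise.
true_response = {1: -40.0, 2: -15.0}  # hypothetical per-router responsiveness
rate_change = sum(gain * codes[r] for r, gain in true_response.items())
rate_change = rate_change + rng.normal(0, 3, codes.shape[1])

# Correlating with router 1's own signature isolates its test from the
# simultaneous tests at other routers (orthogonality of the codes).
estimate = rate_change @ codes[1] / codes.shape[1]
print(f"router 1 estimated responsiveness: {estimate:.1f}")
```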

Relevance: 20.00%

Abstract:

Widespread adoption of lead-free materials and processing for printed circuit board (PCB) assembly has raised reliability concerns regarding surface insulation resistance (SIR) degradation and electrochemical migration (ECM). As PCB conductor spacings decrease, electronic products become more susceptible to these failure mechanisms, especially in the presence of surface contamination and flux residues that may remain after no-clean processing. Moreover, the probability of failure due to SIR degradation and ECM is affected by the interaction between physical factors (such as temperature, relative humidity, and electric field) and chemical factors (such as solder alloy, substrate material, and no-clean processing). Current industry standards for assessing SIR reliability are designed to serve as short-term qualification tests, typically lasting 72 to 168 hours, and do not provide a prediction of reliability in long-term applications. The risk of electrochemical migration in lead-free assemblies has not been adequately investigated, and the mechanism of electrochemical migration is not completely understood; for example, the role of path formation has not been discussed in previous studies. There are also very few studies on the development of rapid assessment methodologies for characterizing materials, such as solder flux, with respect to their potential for promoting ECM. In this dissertation, the following research accomplishments are described:

1. Long-term temperature-humidity-bias (THB) testing over 8,000 hours assessing the reliability of printed circuit boards processed with a variety of lead-free solder pastes, solder pad finishes, and substrates.

2. Identification of silver migration from Sn3.5Ag and Sn3.0Ag0.5Cu lead-free solders, a completely new finding relative to previous research.

3. Establishment of the role of path formation as a step in the ECM process, and clarification of the sequence of individual steps in the mechanism of ECM: path formation, electrodissolution, ion transport, electrodeposition, and filament formation.

4. Development of appropriate accelerated testing conditions for assessing the susceptibility of no-clean processed PCBs to ECM:

   a. Conductor spacings in test structures should be reduced to reflect the trend toward higher-density electronics and to capture the effect of path formation, independent of electric field, on time-to-failure.

   b. THB testing temperatures should be chosen according to the materials present on the PCB, since testing at 85 °C can cause the evaporation of weak organic acids (WOAs) in the flux residues, leading one to underestimate the risk of ECM.

5. Correlation of THB testing with ion chromatography analysis and potentiostat measurements to develop an efficient and effective assessment methodology characterizing the effect of no-clean processing on ECM.

Relevance: 20.00%

Abstract:

With the growing demand for high-speed, high-quality short-range communication, multi-band orthogonal frequency division multiplexing ultra-wideband (MB-OFDM UWB) systems have recently garnered considerable interest in industry and academia. To achieve a low-cost solution, highly integrated transceivers with small die area and minimal power consumption are required, and the key building block of the transceiver is the frequency synthesizer. A frequency synthesizer comprising two phase-locked loops (PLLs) and one multiplexer is presented in this thesis. Ring oscillators are adopted in the PLL implementation in order to drastically reduce the die area of the frequency synthesizer, and the poor spectral purity that afflicts mixer-based frequency synthesizers is greatly improved in this design. Based on specifications derived from the application standards, a design methodology is presented to obtain the parameters of the building blocks, and simulation results are provided to verify the performance of the proposed design.
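For context, the carrier frequencies such a synthesizer must generate are fixed by the ECMA-368 band plan; the sketch below lists them (standard background, not taken from the thesis, which may target only a subset such as band group 1):

```python
# MB-OFDM UWB band-center frequencies per ECMA-368:
# f_c = 2904 + 528 * n_b MHz for n_b = 1..14 (528 MHz channel spacing).
for nb in range(1, 15):
    fc = 2904 + 528 * nb
    group = (nb - 1) // 3 + 1   # three bands per group (group 5 has two)
    print(f"band {nb:2d} (group {group}): {fc} MHz")
```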

Relevance: 20.00%

Abstract:

Using scientific methods in the humanities is at the forefront of objective literary analysis. However, processing big data is particularly complex when the subject matter is qualitative rather than numerical: large volumes of text require specialized tools to produce quantifiable data from ideas and sentiments. Our team researched the extent to which tools such as Weka and MALLET can test hypotheses about qualitative information. We examined the claim that literary commentary exists within political environments, using US periodical articles concerning Russian literature in the early twentieth century as a case study. These tools generated useful quantitative data that allowed us to run stepwise binary logistic regressions. These statistical tests allowed for time-series experiments using sea-change and emergency models of history, as well as classification experiments with regard to author characteristics, social issues, and sentiment expressed. Both types of experiments supported our claim to varying degrees but, more importantly, served as a definitive demonstration that digitally enhanced quantitative forms of analysis can apply to qualitative data. Our findings set the foundation for further experiments in the emerging field of digital humanities.
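As a minimal sketch of the text-to-quantitative-data step, using scikit-learn in place of Weka/MALLET for brevity; the corpus and labels below are placeholders, not the project's data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus: periodical articles with a binary label to test
# (e.g., written before vs. after a political turning point).
texts = [
    "commentary praising the novel's social realism ...",
    "review condemning the author's political sympathies ...",
]
labels = [0, 1]

# TF-IDF turns qualitative text into quantifiable features; a binary
# logistic regression can then test hypotheses about those labels.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["an unseen article about Russian literature ..."]))
```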