127 results for Statistics | Economics, Finance | Engineering, Electronics and Electrical | Physics, General


Relevance: 100.00%

Abstract:

The purpose of this study was to design a preventive scheme using directional antennas to improve the performance of mobile ad hoc networks. In this dissertation, a novel Directionality based Preventive Link Maintenance (DPLM) scheme is proposed to characterize the performance gain [JaY06a, JaY06b, JCY06] obtained by extending the life of a link. In order to maintain the link and take preventive action, the signal strength of data packets is measured. In addition, location information or angle-of-arrival information is collected during communication and saved in a table. When the measured signal strength falls below the orientation threshold, an orientation warning is generated towards the previous-hop node. Once the orientation warning is received by the previous-hop (adjacent) node, that node verifies the correctness of the warning with a few hello pings, initiates a high-quality directional link (a link above the threshold), and immediately switches to it, avoiding a link break altogether. The location information is used to create the directional link by orienting the neighboring nodes' antennas towards each other. We call this operation an orientation handoff, which is similar to soft handoff in cellular networks.

Signal strength is the indicating factor that represents the health of the link and helps to predict link failure. In other words, link breakage happens when node movement reduces the signal strength of received packets. The DPLM scheme helps ad hoc networks avoid or postpone the costly route rediscovery operation of on-demand routing protocols by taking the above-mentioned preventive action.

This dissertation advocates close but simple collaboration between the routing, medium access control, and physical layers. In order to extend the link, the Dynamic Source Routing (DSR) and IEEE 802.11 MAC protocols were modified to use the ability of directional antennas to transmit over longer distances. A directional antenna module was implemented in the OPNET simulator with two separate modes of operation: omnidirectional and directional. The antenna module was incorporated into the wireless node model, and simulations were performed to characterize the performance improvement of mobile ad hoc networks. Extensive simulations have shown that, without noticeably affecting the behavior of the routing protocol, aggregate throughput, packet delivery ratio, end-to-end delay (latency), routing overhead, number of data packets dropped, and number of path breaks are improved considerably. Analysis of the results across different scenarios shows that the use of directional antennas with the proposed DPLM scheme is a promising way to improve the performance of mobile ad hoc networks.
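The core preventive decision can be pictured with a short sketch. The following Python fragment is a minimal illustration, not the DPLM implementation itself: the threshold values, message names, and the send/ping/orient helpers passed in as callbacks are all assumptions made for the example.

```python
# Illustrative sketch of the preventive decision described above: per-packet signal
# strength is compared with an orientation threshold, an orientation warning is sent
# to the previous hop, and the previous hop confirms the warning with a few hello
# pings before performing an orientation handoff. All names and values are assumed.
ORIENTATION_THRESHOLD_DBM = -82.0      # assumed warning level
LINK_BREAK_THRESHOLD_DBM = -88.0       # assumed level at which the link is lost

def on_data_packet(rssi_dbm, previous_hop, send_warning):
    """Called by the receiver for each data packet."""
    if LINK_BREAK_THRESHOLD_DBM < rssi_dbm < ORIENTATION_THRESHOLD_DBM:
        # Link is degrading but still usable: warn the previous hop early.
        send_warning(previous_hop, reason="orientation_warning", rssi=rssi_dbm)

def on_orientation_warning(neighbor, hello_ping, orient_antennas, n_pings=3):
    """Called by the previous-hop node when an orientation warning arrives."""
    weak = [hello_ping(neighbor) < ORIENTATION_THRESHOLD_DBM for _ in range(n_pings)]
    if all(weak):
        # Warning confirmed: orient both antennas and switch to the directional
        # link before the omnidirectional link breaks (orientation handoff).
        orient_antennas(neighbor)
```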

Relevance: 100.00%

Abstract:

Shipboard power systems have different characteristics from utility power systems. In a shipboard power system, it is crucial that the systems and equipment work at their peak performance levels. One of the most demanding aspects of simulating shipboard power systems is connecting the device under test to a real-time simulated dynamic equivalent in an environment with actual hardware in the loop (HIL). Real-time simulation can be achieved by using a multi-distributed modeling concept, in which the global system model is distributed over several processors through a communication link. The advantage of this approach is that it permits a gradual change from pure simulation to actual application. In order to perform system studies in such an environment, physical phase-variable models of different components of the shipboard power system were developed using operational parameters obtained from finite element (FE) analysis. These models were developed for two types of studies: low-frequency and high-frequency studies. Low-frequency studies examine the shipboard power system's behavior under load switching and faults. High-frequency studies were used to predict abnormal conditions due to overvoltage and the components' harmonic behavior. Different experiments were conducted to validate the developed models, and the simulation and experimental results show excellent agreement. The behavior of shipboard power system components under internal faults was investigated using FE analysis. This technique is crucial for shipboard power system fault detection because of the lack of comprehensive fault test databases. A wavelet-based methodology for feature extraction from shipboard power system current signals was developed for harmonic and fault diagnosis studies. This modeling methodology can be used to evaluate and predict the future behavior of NPS components at the design stage, which will reduce development cycles, cut overall cost, prevent failures, and allow each subsystem to be tested exhaustively before it is integrated into the system.
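As a rough illustration of the wavelet-based feature extraction step, the sketch below decomposes a current waveform into multiresolution levels and uses per-level relative energies as features. It assumes the PyWavelets package; the wavelet family ('db4'), number of levels, and the toy fault-like transient are illustrative choices, not those of the dissertation.

```python
# Minimal sketch of wavelet-based feature extraction from a current signal:
# multiresolution decomposition followed by per-level energy features.
import numpy as np
import pywt

def wavelet_energy_features(signal, wavelet="db4", level=5):
    """Return the relative energy of each decomposition level as a feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA_L, cD_L, ..., cD_1]
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

if __name__ == "__main__":
    fs = 10_000.0
    t = np.arange(0, 0.2, 1 / fs)
    current = np.sin(2 * np.pi * 60 * t)                  # 60 Hz fundamental
    current[1000:1050] += 2.0 * np.random.randn(50)       # toy fault-like transient
    print(wavelet_energy_features(current))
```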

Relevance: 100.00%

Abstract:

A high-frequency physical phase-variable electric machine model was developed using FE analysis. The model was implemented in a machine drive environment with hardware in the loop. The novelty of the proposed model is that it is derived from the actual geometrical and other physical information of the motor, considering each individual turn in the winding. This is the first attempt to develop such a model to obtain high-frequency machine parameters without resorting to the expensive experimental procedures currently in use. The model was used in a dynamic simulation environment to predict inverter-motor interaction, including motor terminal overvoltage, current spikes, and switching effects. In addition, a complete drive model was developed for electromagnetic interference (EMI) analysis and evaluation. This consists of lumped-parameter models of the different system components, such as the cable, inverter, and motor; the lumped-parameter models enable faster simulations. The results obtained were verified by experimental measurements, and excellent agreement was obtained. A change in the winding arrangement and its influence on the motor's high-frequency behavior were also investigated; this was shown to have little effect on the parameter values and on the motor's high-frequency behavior for an equal number of turns. Accurate prediction of overvoltage and EMI in the design stages of the drive system would reduce the time required for design modifications as well as for the evaluation of EMC compliance issues. The model can be used in design optimization and insulation selection for motors. Use of this procedure could prove economical, as it would help designers develop and test new motor designs and evaluate their operational impacts in various motor drive applications.

Relevance: 100.00%

Abstract:

Next-generation networks are characterized by ever-increasing complexity, intelligence, heterogeneous technologies, and rising user expectations. Telecommunication networks in particular have become truly global, consisting of a variety of national and regional networks, both wired and wireless. Consequently, the management of telecommunication networks is becoming increasingly complex. In addition, network security and reliability requirements impose additional overheads that increase the size of the data records, which in turn causes acute network traffic congestion. No single network management methodology can control the various requirements of today's networks while providing a good level of Quality of Service (QoS) and network security. Therefore, an integrated approach is needed in which a combination of methodologies can provide solutions and answers to network events (which cause severe congestion and compromise quality of service and security). The proposed solution takes a systematic approach to designing a network management system based on recent advances in mobile agent technology. This solution provides a new traffic management system for telecommunication networks that is capable of (1) reducing the network traffic load (and thus traffic congestion), (2) overcoming existing network latency, (3) adapting dynamically to the traffic load of the system, (4) operating in heterogeneous environments with improved security, and (5) exhibiting robust and fault-tolerant behavior. This solution addresses several key challenges in the development of network management for telecommunication networks using mobile agents. We designed several types of agents whose interactions allow complex management actions to be performed and integrated. Our solution is decentralized, which eliminates excessive bandwidth usage, and at the same time extends the capabilities of the Simple Network Management Protocol (SNMP) while remaining fully compatible with existing standards.

Relevance: 100.00%

Abstract:

The purpose of this investigation was to develop new techniques to generate segmental assessments of body composition based on Segmental Bioelectrical Impedance Analysis (SBIA). An equally important consideration was the design, simulation, development, and software and hardware integration of the SBIA system. This integration was carried out with a Very Large Scale Integration (VLSI) Field Programmable Gate Array (FPGA) microcontroller that analyzed the measurements obtained from segments of the body and provided full-body and segmental Fat-Free Mass (FFM) and Fat Mass (FM) percentages. Issues related to estimating body composition in persons with spinal cord injury (SCI) were also addressed and investigated. This investigation demonstrated that the SBIA methodology provides accurate segmental body composition measurements. Disabled individuals are expected to benefit from these SBIA evaluations, as the method is non-invasive and suitable for paralyzed individuals. The SBIA VLSI system may replace bulky, inflexible electronic modules attached to the body.
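For illustration only, the sketch below shows how an impedance reading could be turned into a fat-free mass estimate through the common impedance-index (height^2 / resistance) regression form; the coefficients are placeholders, not the calibrated segmental equations used by the SBIA system.

```python
# Illustrative sketch of a bioimpedance-based FFM/FM estimate. The regression
# coefficients below are hypothetical placeholders; real BIA equations are
# calibrated per population and per body segment.
def ffm_kg(height_cm, resistance_ohm, weight_kg,
           a=0.60, b=0.20, c=5.0):               # hypothetical coefficients
    """Estimate fat-free mass from an impedance measurement (calibration dependent)."""
    impedance_index = (height_cm ** 2) / resistance_ohm
    return a * impedance_index + b * weight_kg + c

def fat_mass_percent(weight_kg, ffm):
    """Fat mass percentage is what remains after fat-free mass."""
    return 100.0 * (weight_kg - ffm) / weight_kg

if __name__ == "__main__":
    ffm = ffm_kg(height_cm=175.0, resistance_ohm=480.0, weight_kg=70.0)
    print(round(ffm, 1), "kg FFM,", round(fat_mass_percent(70.0, ffm), 1), "% FM")
```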

Relevance: 100.00%

Abstract:

This dissertation presents a unique research opportunity by using recordings that provide an electrocardiogram (ECG) plus a reference breathing signal (RBS). ECG-derived breathing (EDR) is measured and correlated against the RBS. Standard deviations of multiresolution wavelet analysis coefficients (SDMW) are obtained from heart rate and classified using the RBS. Prior work by others used selected patients for sleep apnea scoring with EDR but no RBS; other prior work classified selected heart disease patients with SDMW but no RBS. This study used randomly chosen sleep disorder patient recordings, covering central and obstructive apneas, with and without heart disease.

Implementation required creating an application because existing systems were limited in power and scope. A review survey was created to choose a development environment, and the survey is presented as a learning tool and teaching resource. The development objective was rapid development using limited resources (manpower and money), and Open Source resources were used exclusively for the implementation.

The results show: (1) Three groups of patients exist in the study. Grouping RBS correlations shows a response with either ECG interval or amplitude variation; a third group exists where neither ECG intervals nor amplitude variation correlate with breathing. (2) Previous work by other groups analyzed SDMW. Similar results were found in this study, but some subjects had higher SDMW, attributed to a large number of apneas, arousals, and/or disconnects. SDMW does not need the RBS to show that apneic conditions exist within ECG recordings. (3) The results support the assertion that autonomic nervous system variation was measured with SDMW, and the measurements are not corrupted by breathing even though respiration overlaps the same frequency band.

Overall, this work becomes an Open Source resource that can be reused, modified, and/or expanded, and it may fast-track additional research. In the future the system could also be used with public-domain data; prerecorded data exist in similar formats in public databases, which could provide additional research opportunities.
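A minimal sketch of the SDMW feature is given below: the standard deviations of multiresolution wavelet detail coefficients computed from an RR-interval (heart-rate) series. It assumes PyWavelets; the wavelet family, decomposition depth, and the synthetic RR series are illustrative assumptions rather than the study's settings.

```python
# Illustrative SDMW sketch: standard deviation of multiresolution wavelet
# coefficients of a heart-rate (RR-interval) series.
import numpy as np
import pywt

def sdmw(rr_intervals_s, wavelet="db4", level=4):
    """Return the standard deviation of the detail coefficients at each level."""
    coeffs = pywt.wavedec(rr_intervals_s, wavelet, level=level)
    details = coeffs[1:]                          # cD_level ... cD_1
    return [float(np.std(d)) for d in details]

if __name__ == "__main__":
    n = 512
    rr = 0.8 + 0.05 * np.sin(2 * np.pi * np.arange(n) / 20)   # toy respiratory modulation
    rr += 0.01 * np.random.randn(n)
    print(sdmw(rr))
```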

Relevance: 100.00%

Abstract:

Recent advances in electronic and computer technologies have led to the widespread deployment of wireless sensor networks (WSNs). WSNs have a wide range of applications, including military sensing and tracking, environmental monitoring, and smart environments. Many WSNs, such as those in military applications, have mission-critical tasks, so security issues in WSNs remain at the forefront of research. Compared with other wireless networks, such as ad hoc and cellular networks, security in WSNs is more complicated due to the constrained capabilities of sensor nodes and the properties of the deployment, such as large scale and hostile environments. Security issues mainly come from attacks. In general, attacks in WSNs can be classified as external attacks and internal attacks. In an external attack, the attacking node is not an authorized participant of the sensor network, and cryptography and other security methods can prevent some of these attacks. However, node compromise, the major and unique problem that leads to internal attacks, can undermine all of these prevention efforts. Knowing the probability of node compromise helps systems detect and defend against it. Although some approaches can detect and defend against node compromise, few of them can estimate the probability of node compromise. Hence, we develop basic uniform, basic gradient, intelligent uniform, and intelligent gradient models of node compromise distribution, using probability theory, in order to adapt to different application environments. These models allow systems to estimate the probability of node compromise, and applying them in system security designs can improve security and decrease overheads in nearly every security area. Moreover, based on these models, we design a novel secure routing algorithm to defend against the routing security issue posed by nodes that have already been compromised but have not yet been detected by the node compromise detection mechanism. The routing paths in our algorithm detour around nodes that have already been detected as compromised or that have a high probability of being compromised. Simulation results show that our algorithm effectively protects routing paths from node compromise, whether detected or not.
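The detouring idea can be sketched as a shortest-path search whose cost penalizes nodes with a high estimated probability of compromise. The fragment below is an illustrative stand-in, not the dissertation's algorithm: the toy graph, the probability values, and the logarithmic penalty are all assumptions.

```python
# Illustrative compromise-aware routing: Dijkstra where traversing a node adds a
# penalty that grows with its estimated compromise probability.
import heapq
import math

def safest_path(adjacency, p_compromise, src, dst):
    """Shortest path where entering node v costs 1 - log(1 - p_compromise[v])."""
    def node_cost(v):
        return 1.0 - math.log(max(1.0 - p_compromise.get(v, 0.0), 1e-9))

    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adjacency.get(u, ()):
            nd = d + node_cost(v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

if __name__ == "__main__":
    adj = {"S": ["A", "B"], "A": ["D"], "B": ["D"], "D": []}
    p = {"A": 0.7, "B": 0.05, "D": 0.02}    # estimated node compromise probabilities
    print(safest_path(adj, p, "S", "D"))     # detours through B, avoiding likely-compromised A
```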

Relevance: 100.00%

Abstract:

With the rapid growth of the Internet, computer attacks are increasing at a fast pace and can easily cause millions of dollars in damage to an organization. Detecting these attacks is an important issue in computer security. There are many types of attacks, and they fall into four main categories: Denial of Service (DoS) attacks, Probe attacks, User to Root (U2R) attacks, and Remote to Local (R2L) attacks. Within these categories, DoS and Probe attacks appear with high frequency over a short period of time when they strike a system; they differ from normal traffic data and can easily be separated from normal activities. In contrast, U2R and R2L attacks are embedded in the data portions of packets and normally involve only a single connection, which makes it difficult to achieve satisfactory detection accuracy for these two attack types. Therefore, we focus on the ambiguity problem between normal activities and U2R/R2L attacks. The goal is to build a detection system that can accurately and quickly detect these two attacks. In this dissertation, we design a two-phase intrusion detection approach. In the first phase, a correlation-based feature selection algorithm is proposed to speed up detection. Features with poor ability to predict attack signatures, and features inter-correlated with one or more other features, are considered redundant; such features are removed, and only indispensable information about the original feature space remains. In the second phase, we develop an ensemble intrusion detection system to achieve accurate detection performance. The proposed method includes multiple feature-selecting intrusion detectors and a data mining intrusion detector. The former consists of a set of detectors, each of which uses a fuzzy clustering technique and belief theory to resolve the ambiguity problem. The latter applies data mining techniques to automatically extract computer users' normal behavior from training network traffic data. The final decision is a combination of the outputs of the feature-selecting and data mining detectors. The experimental results indicate that our ensemble approach not only significantly reduces the detection time but also effectively detects U2R and R2L attacks that contain a degree of ambiguous information.
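The first-phase idea can be sketched as follows: rank features by their correlation with the class label, then drop features that add little predictive value or that are strongly inter-correlated with an already selected feature. The thresholds and toy data below are assumptions for illustration, not the exact algorithm of the dissertation.

```python
# Illustrative correlation-based feature selection: keep label-relevant features,
# drop weakly predictive or redundant (highly inter-correlated) ones.
import numpy as np

def select_features(X, y, relevance_min=0.1, redundancy_max=0.9):
    """Return indices of selected feature columns of X (samples x features)."""
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_features)])
    order = np.argsort(-relevance)              # most label-relevant first
    selected = []
    for j in order:
        if relevance[j] < relevance_min:
            break                               # remaining features predict poorly
        redundant = any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > redundancy_max
                        for k in selected)
        if not redundant:
            selected.append(int(j))
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 200).astype(float)
    f0 = y + 0.3 * rng.standard_normal(200)     # relevant feature
    f1 = f0 + 0.01 * rng.standard_normal(200)   # redundant copy of f0
    f2 = rng.standard_normal(200)               # irrelevant noise
    print(select_features(np.column_stack([f0, f1, f2]), y))   # expect feature 0 to be kept
```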

Relevance: 100.00%

Abstract:

As traffic congestion continues to worsen in large urban areas, solutions are urgently sought. However, transportation planning models, which estimate traffic volumes on transportation network links, are often unable to realistically consider travel time delays at intersections. Introducing signal controls in such models often results in significant and unstable changes in network attributes, which in turn leads to instability of the models, while ignoring the effect of delays at intersections makes the model output inaccurate and unable to predict travel time. To represent traffic conditions in a network more accurately, planning models should be capable of arriving at a network solution based on travel costs that are consistent with the intersection delays due to signal controls. This research attempts to achieve this goal by optimizing signal controls and estimating intersection delays accordingly, which are then used in traffic assignment. Simultaneous optimization of traffic routing and signal controls has not previously been accomplished in real-world applications of traffic assignment. To this end, a delay model dealing with five major types of intersections has been developed using artificial neural networks (ANNs). An ANN architecture consists of interconnected artificial neurons; it may be used either to gain an understanding of biological neural networks or to solve artificial intelligence problems without necessarily creating a model of a real biological system. The ANN delay model has been trained using extensive simulations based on TRANSYT-7F signal optimizations. The delay estimates produced by the ANN model have percentage root-mean-squared errors (%RMSE) of less than 25.6%, which is satisfactory for planning purposes; larger prediction errors are typically associated with severely oversaturated conditions. A combined system has also been developed that includes the ANN delay estimation model and a user-equilibrium (UE) traffic assignment model. The combined system employs the Frank-Wolfe method to achieve a convergent solution. Because the ANN delay model provides no derivatives of the delay function, a Mesh Adaptive Direct Search (MADS) method is applied to assist in and expedite the iterative process of the Frank-Wolfe method. The performance of the combined system confirms that convergence is achieved, although the global optimum may not be guaranteed.
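The structure of the combined assignment loop can be sketched on a toy network. In the fragment below, a BPR-style delay function stands in for the ANN delay model and a simple successive-averages step replaces the MADS-assisted line search; the network, demand, and parameters are illustrative assumptions only.

```python
# Illustrative Frank-Wolfe style user-equilibrium loop on a toy two-route network.
# A BPR-style delay stands in for the ANN delay model; a successive-averages step
# replaces the derivative-free (MADS-assisted) line search of the dissertation.
import heapq

links = {                                   # (from, to) -> free-flow time, capacity
    ("A", "B"): {"t0": 10.0, "c": 400.0},
    ("A", "C"): {"t0": 4.0,  "c": 300.0},
    ("C", "B"): {"t0": 5.0,  "c": 300.0},
}
demand = {("A", "B"): 600.0}                # vehicles per hour from A to B

def delay(link, flow):
    """Stand-in for the ANN delay model: BPR-style travel time."""
    d = links[link]
    return d["t0"] * (1.0 + 0.15 * (flow / d["c"]) ** 4)

def shortest_paths(times, origin):
    """Dijkstra over the current link travel times; returns predecessor links."""
    dist, prev = {origin: 0.0}, {}
    pq = [(0.0, origin)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for (a, b) in links:
            if a != u:
                continue
            nd = d + times[(a, b)]
            if nd < dist.get(b, float("inf")):
                dist[b], prev[b] = nd, (a, b)
                heapq.heappush(pq, (nd, b))
    return prev

def all_or_nothing(times):
    """Assign each OD demand entirely to its current shortest path."""
    flows = {l: 0.0 for l in links}
    for (o, dst), q in demand.items():
        prev = shortest_paths(times, o)
        node = dst
        while node != o:
            a, b = prev[node]
            flows[(a, b)] += q
            node = a
    return flows

flows = all_or_nothing({l: delay(l, 0.0) for l in links})
for k in range(2, 51):                      # Frank-Wolfe / successive-averages iterations
    times = {l: delay(l, flows[l]) for l in links}
    target = all_or_nothing(times)          # auxiliary all-or-nothing solution
    step = 1.0 / k
    flows = {l: (1 - step) * flows[l] + step * target[l] for l in links}

for l, f in flows.items():
    print(l, round(f, 1), "veh/h, travel time", round(delay(l, f), 2))
```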

Relevance: 100.00%

Abstract:

This dissertation proposed a self-organizing medium access control (MAC) protocol for wireless sensor networks (WSNs). The proposed MAC protocol, space division multiple access (SDMA), relies on sensor node position information and provides sensor nodes with access to the wireless channel based on their spatial locations. SDMA divides a geographical area into space divisions, with a one-to-one mapping between the space divisions and the time slots. Therefore, the protocol only requires that each sensor node know its position and have prior knowledge of the one-to-one mapping function. The scheme is scalable, self-maintaining, and self-starting. It provides collision-free access to the wireless channel for the sensor nodes, thereby guaranteeing delay-bounded communication in real time for delay-sensitive applications. This work was divided into two parts. The first part involved the design of the mapping function from space divisions to time slots. The mapping function is based on a uniform Latin square. A uniform Latin square of order k = m^2 is a k x k matrix consisting of k symbols from 0 to k-1 such that no symbol appears more than once in any row, in any column, or in any m x m main subsquare. The uniqueness of each symbol within the main subsquares is a very attractive characteristic when applying a uniform Latin square to the time slot allocation problem in WSNs. The second part of this research involved designing a GPS-free positioning system to provide position information. The system is called the time and power based localization scheme (TPLS). TPLS is based on time difference of arrival (TDoA) and received signal strength (RSS), using radio frequency and ultrasonic signals to measure the range differences from a sensor node to three anchor nodes. TPLS requires low computation overhead and no time synchronization, as the location estimation algorithm involves only simple algebraic operations.
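One standard way to build such a square is the "Sudoku-pattern" construction sketched below; the slot lookup from a node's space-division indices is shown for illustration and is not necessarily the exact mapping function used in the dissertation.

```python
# Illustrative uniform Latin square of order k = m^2: no symbol repeats in any row,
# column, or m x m main subsquare, so divisions sharing a slot are spatially spread out.

def uniform_latin_square(m):
    """Return a k x k matrix (k = m*m) built with a standard Sudoku-pattern formula."""
    k = m * m
    return [[(m * (r % m) + r // m + c) % k for c in range(k)] for r in range(k)]

def time_slot(square, row_div, col_div):
    """Map a node's space division (row, column) to its assigned time slot."""
    return square[row_div][col_div]

if __name__ == "__main__":
    sq = uniform_latin_square(2)      # k = 4 divisions per axis, 4 time slots
    for row in sq:
        print(row)
    print("slot for division (2, 3):", time_slot(sq, 2, 3))
```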

Relevance: 100.00%

Abstract:

Internet Protocol Television (IPTV) is a system in which a digital television service is delivered using the Internet Protocol over a network infrastructure. There is considerable confusion and concern about IPTV, since two different technologies have to be melded together to provide end customers with something better than conventional television. In this research, the functional architecture of the IPTV system was investigated. A Very Large Scale Integration (VLSI) based streaming server controller was designed, and different ways of hosting a web server that sends control signals to the streaming server controller were studied. The web server accepts inputs from the keyboard and the FPGA board switches and, depending on the preset configuration, opens a selected web page and sends the control signals to the streaming server controller. It was observed that the applications run faster on the PowerPC since it is embedded in the FPGA. The commercial market and global deployment of IPTV were also discussed.

Relevance: 100.00%

Abstract:

It is well documented that avoidable traffic accidents occur when motorists miss or ignore traffic signs. With drivers' attention being diverted by distractions such as cell phone conversations, missing traffic signs has become more prevalent. Poor weather and other adverse driving conditions also make it harder for motorists to remain alert at all times and see every traffic sign on the road. In addition, most cars do not have any form of traffic sign assistance. Because of heavy traffic and the proliferation of traffic signs on the roads, there is a need for a system that helps the driver avoid missing traffic signs and thereby reduces the probability of an accident. Since visual information is critical for driving, processed video signals from cameras were chosen to assist drivers; such inexpensive cameras can easily be mounted on the automobile. The objective of the present investigation and the system development is to recognize traffic signs electronically and alert drivers. For the case study and system development, five important and critical traffic signs were selected: STOP, NO ENTER, NO RIGHT TURN, NO LEFT TURN, and YIELD. The system was evaluated by processing still pictures taken on public roads, and the recognition results were presented in an analysis table indicating correct and false identifications. The system reached an acceptable recognition rate of 80% for all five traffic signs, with a processing time of about three seconds. The capabilities of MATLAB, VLSI design platforms, and coding were used to generate a visual warning, complementing the visual driver support system with a Field Programmable Gate Array (FPGA) on a XUP Virtex-II Pro Development System.
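As a loose illustration of the kind of processing involved, the sketch below segments strongly red pixels (candidates for signs such as STOP or NO ENTER) and flags a frame when the red region is large enough; it is a Python stand-in with assumed thresholds, not the MATLAB/FPGA pipeline developed in this work.

```python
# Illustrative first step of camera-based sign detection: red-color segmentation
# of an RGB frame. Thresholds are assumptions; real systems add shape and
# template matching before classifying a specific sign.
import numpy as np

def red_candidate_mask(rgb, r_min=120, dominance=1.5):
    """Boolean mask of pixels whose red channel dominates green and blue."""
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r > r_min) & (r > dominance * g) & (r > dominance * b)

def contains_sign_candidate(rgb, min_fraction=0.01):
    """Flag the frame if a large enough fraction of pixels is 'sign red'."""
    return red_candidate_mask(rgb).mean() >= min_fraction

if __name__ == "__main__":
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    frame[80:160, 120:200] = (200, 20, 20)        # synthetic red patch
    print(contains_sign_candidate(frame))          # True
```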

Relevance: 100.00%

Abstract:

Wireless sensor networks are emerging as effective tools for gathering and disseminating data. They can be applied in many fields, including health, environmental monitoring, home automation, and the military. As with all other computing systems, it is necessary to include security features so that security-sensitive data traversing the network are protected. However, traditional security techniques cannot be applied to wireless sensor networks, owing to the constraints on battery power, memory, and the computational capacity of the miniature wireless sensor nodes. To address this need, it becomes necessary to develop new lightweight security protocols. This dissertation focuses on designing a suite of lightweight trust-based security mechanisms and a cooperation enforcement protocol for wireless sensor networks. The dissertation presents a trust-based mechanism for electing new cluster heads; this solution prevents a major security breach against the routing protocol, namely the election of malicious or compromised cluster heads. It also describes a location-aware, trust-based mechanism for detecting and isolating compromised nodes. Both of these mechanisms rely on the ability of a node to monitor its neighbors: using neighbor monitoring techniques, nodes determine their neighbors' reputation and trust level through probabilistic modeling. The mechanisms were designed to mitigate internal attacks within wireless sensor networks, and the feasibility of the approach is demonstrated through extensive simulations. The dissertation also addresses non-cooperation problems in multi-user wireless sensor networks; a scalable, lightweight cooperation enforcement algorithm using evolutionary game theory is designed, and its effectiveness is validated through mathematical analysis and simulation. This research has advanced the knowledge of wireless sensor network security and cooperation by developing new techniques based on mathematical models. In doing so, we have enabled others to build on this work towards the creation of highly trusted wireless sensor networks, facilitating their full utilization in many fields, from civilian to military applications.
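The neighbor-monitoring idea can be sketched with a simple beta-reputation estimate: each node counts cooperative and uncooperative events observed from a neighbor and derives a trust value, and the most trusted neighbor is favored for cluster head election. The beta model and the example counts are illustrative assumptions, not necessarily the dissertation's exact probabilistic model.

```python
# Illustrative trust-based cluster head election using a beta-reputation estimate
# built from observed cooperative / uncooperative events per neighbor.
def trust(cooperative, uncooperative):
    """Expected trust under a Beta(cooperative + 1, uncooperative + 1) reputation."""
    return (cooperative + 1.0) / (cooperative + uncooperative + 2.0)

def elect_cluster_head(observations):
    """Pick the neighbor with the highest trust value."""
    return max(observations, key=lambda n: trust(*observations[n]))

if __name__ == "__main__":
    # neighbor -> (cooperative events seen, uncooperative events seen)
    observed = {"node_3": (40, 2), "node_7": (15, 14), "node_9": (4, 0)}
    print({n: round(trust(*c), 3) for n, c in observed.items()})
    print("elected cluster head:", elect_cluster_head(observed))
```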

Relevance: 100.00%

Abstract:

This dissertation proposed a new approach to seizure detection in intracranial EEG (iEEG) recordings using nonlinear decision functions. It implemented well-established features designed to deal with complex signals such as brain recordings and proposed a 2-D domain of analysis. Since the features considered span both the time and frequency domains, the analysis was carried out both temporally and as a function of different frequency ranges in order to ascertain the measures most suitable for seizure detection. In retrospect, this study established a generalized approach to seizure detection that works across several features and across patients.

Clinical experiments involved 8 patients with intractable seizures who were being evaluated for potential surgical intervention. A total of 35 of the iEEG data files collected were used in a training phase to ascertain the reliability of the formulated features; the remaining 69 iEEG data files were then used in the testing phase.

The testing phase revealed that the correlation sum is the feature that performed best across all patients, with a sensitivity of 92% and an accuracy of 99%. The second-best feature was the gamma power, with a sensitivity of 92% and an accuracy of 96%. In the frequency domain, the five other spectral bands considered yielded mixed results, with low sensitivity in some frequency bands and low accuracy in others, which is expected given that the dominant frequencies in iEEG are those of the gamma band. In the time domain, the other features, which included mobility, complexity, and activity, all performed very well, with an average sensitivity of 80.3% and an accuracy of 95%.

The computation time needed to generate these nonlinear decision functions in the training phase was extremely long. It was determined that when the duration dimension was rescaled, the results improved and the nonlinear decision functions converged dramatically faster, by more than 100-fold. Through this rescaling, the sensitivity of the correlation sum improved to 100% and the sensitivity of the gamma power to 97%, meaning that even fewer false negatives and false positives were detected.
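For reference, the gamma-power feature can be sketched as the power of the Welch spectral density summed over the gamma band; the sampling rate, window length, and the 30-70 Hz range in the fragment below are assumptions for illustration, not the dissertation's exact settings.

```python
# Illustrative gamma-band power feature from an iEEG channel using a Welch PSD.
import numpy as np
from scipy.signal import welch

def band_power(x, fs, f_lo, f_hi, nperseg=1024):
    """Approximate signal power in [f_lo, f_hi] Hz from the Welch PSD."""
    f, pxx = welch(x, fs=fs, nperseg=nperseg)
    band = (f >= f_lo) & (f <= f_hi)
    return np.sum(pxx[band]) * (f[1] - f[0])    # rectangle-rule integration

if __name__ == "__main__":
    fs = 500.0                                  # assumed sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)
    x = np.random.randn(t.size) + np.sin(2 * np.pi * 40 * t)   # toy iEEG-like trace
    print("gamma power (30-70 Hz):", band_power(x, fs, 30.0, 70.0))
```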

Relevance: 100.00%

Abstract:

This dissertation established a state-of-the-art programming tool for designing and training artificial neural networks (ANNs) and showed its applicability to brain research. The developed tool, called NeuralStudio, allows users without programming skills to conduct ANN-based studies in a powerful and very user-friendly interface. A series of unique features has been implemented in NeuralStudio, such as ROC analysis, cross-validation, network averaging, topology optimization, and optimization of the activation functions' slopes. It also includes a Support Vector Machines module for comparison purposes. Once the tool was fully developed, it was applied to two studies in brain research. In the first study, the goal was to create and train an ANN to detect epileptic seizures from subdural EEG; this analysis involved extracting features from the spectral power in the gamma frequencies. In the second application, a unique method was devised to link EEG recordings to epileptic and non-epileptic subjects. The contribution of this method was the development of a descriptor matrix that can represent any EEG file, regardless of its duration and number of electrodes. The first study showed that the inter-electrode mean of the spectral power in the gamma frequencies, together with its duration above a specific threshold, performs better than the other frequencies in seizure detection, exhibiting an accuracy of 95.90%, a sensitivity of 92.59%, and a specificity of 96.84%. The second study showed that Hjorth's activity parameter is sufficient to accurately relate EEG recordings to epileptic and non-epileptic subjects: after testing, the accuracy, sensitivity, and specificity of the classifier were all above 0.9667, and statistical tests established the superiority of activity with over 99.99% certainty. It was thus demonstrated that (1) the spectral power in the gamma frequencies is highly effective in locating seizures from EEG and (2) activity can be used to link EEG recordings to epileptic and non-epileptic subjects. These two studies required a high computational load and could be carried out thanks to NeuralStudio. From a medical perspective, both methods proved the merits of NeuralStudio in brain research applications. For its outstanding features, NeuralStudio was recently awarded a patent (US Patent No. 7502763).
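Hjorth's descriptors are simple to compute; the sketch below returns activity, mobility, and complexity for a single channel, with a toy signal standing in for real EEG.

```python
# Illustrative computation of the Hjorth descriptors (activity, mobility, complexity)
# for a 1-D signal; the synthetic signal and its parameters are assumptions.
import numpy as np

def hjorth_parameters(x):
    """Return (activity, mobility, complexity) of a 1-D signal."""
    dx = np.diff(x)                    # first difference approximates the derivative
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

if __name__ == "__main__":
    fs = 256.0
    t = np.arange(0, 4, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)   # toy EEG-like trace
    print(hjorth_parameters(x))
```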