13 results for Fall and mobility sensor
in Digital Commons at Florida International University
Abstract:
Zinc oxide and graphene nanostructures are important technological materials because of their unique properties and potential applications in future generations of electronic and sensing devices. This dissertation presents strategies for growing zinc oxide nanostructures (thin films and nanowires) and graphene, and investigates their applications as enhanced field effect transistors, chemical sensors, and transparent flexible electrodes. Nanostructured zinc oxide (ZnO) and low-gallium-doped zinc oxide (GZO) thin films were synthesized by a magnetron sputtering process. Zinc oxide nanowires (ZNWs) were grown by a chemical vapor deposition method. Field effect transistors (FETs) of ZnO and GZO thin films and ZNWs were fabricated by standard photo- and electron-beam lithography processes. The electrical characteristics of these devices were investigated after nondestructive surface cleaning and ultraviolet irradiation treatment at high temperature under vacuum. GZO thin film transistors showed a mobility of ∼5.7 cm²/V·s at a low operating voltage of <5 V and a low turn-on voltage of ∼0.5 V, with a subthreshold swing of ∼85 mV/decade. Bottom-gated FETs fabricated from ZNWs exhibited a very high on-to-off ratio (∼10⁶) and mobility (∼28 cm²/V·s). A bottom-gated FET showed large hysteresis of ∼5.0 to 8.0 V, which was significantly reduced to ∼1.0 V by the surface treatment process. The results demonstrate that charge transport in ZnO nanostructures depends strongly on the surface environment and can be explained by the formation of a depletion layer at the surface by various surface states. A nitric oxide (NO) gas sensor using a single ZNW functionalized with Cr nanoparticles was developed. The sensor exhibited an average sensitivity of ∼46% and a minimum detection limit of ∼1.5 ppm for NO gas. The sensor is also selective towards NO gas, as demonstrated by a cross-sensitivity test with N2, CO and CO2 gases.
Graphene film on copper foil was synthesized by a chemical vapor deposition method. A hot-press lamination process was developed for transferring the graphene film to a flexible polymer substrate. The graphene/polymer film exhibited a high-quality, flexible, transparent conductive structure with unique electrical-mechanical properties: ∼88.80% light transmittance and ∼1.1742 kΩ/sq sheet resistance. The application of a graphene/polymer film as a flexible and transparent electrode for field emission displays was demonstrated.
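As a rough illustration of the transistor figures quoted above, field-effect mobility is commonly extracted from the linear-regime transconductance as μ = g_m·L / (W·C_ox·V_DS). This is the standard textbook relation, not necessarily the exact extraction procedure used in the dissertation, and the device values below are hypothetical placeholders:

```python
# Standard linear-regime field-effect mobility extraction (textbook formula).
# All numeric device parameters below are illustrative, not from the dissertation.

def field_effect_mobility(g_m, L, W, C_ox, V_ds):
    """Return mobility in cm^2/(V*s).

    g_m  : transconductance dI_D/dV_G at fixed V_DS (siemens)
    L, W : channel length and width (cm)
    C_ox : gate dielectric capacitance per unit area (F/cm^2)
    V_ds : drain-source bias (V), small enough for the linear regime
    """
    return g_m * L / (W * C_ox * V_ds)

# Hypothetical device: g_m = 1 uS, L = 5 um, W = 10 um, C_ox ~ 11.5 nF/cm^2
mu = field_effect_mobility(g_m=1e-6, L=5e-4, W=1e-3, C_ox=1.15e-8, V_ds=1.0)
```

With these placeholder numbers the extracted mobility lands in the tens of cm²/V·s, the same order as the ZNW devices reported above.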
Abstract:
Ensemble stream modeling and data-cleaning are sensor information processing systems whose goals are cross-validated through different training and testing methods. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and choose the most likely model without overfitting, thus obtaining higher model confidence. Higher quality streams can be realized by combining many short streams into an ensemble of the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for a bush or natural forest-fire event, we take the burnt area (BA*), the sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: one, the histogram of fire activity is highly skewed; two, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate averaging effects; the resulting histogram is more satisfactory, and conceptual knowledge is learned from the sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure such as the F-test is performed at each node's split to select the best attribute. The ensemble stream model approach proved to improve when complicated features were combined with a simpler tree classifier.
The ensemble framework for data-cleaning, together with the enhancements to quantify the quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of sensor data, led to the formation of quality streams for sensor-enabled applications. This further motivates the novelty of stream-quality labeling and its importance in handling the vast number of real-time mobile streams generated today.
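The F-measure mentioned above is the harmonic mean of precision and recall; a minimal sketch of how it is computed from a classifier's confusion counts (the counts here are illustrative, not the dissertation's experimental results):

```python
# F-measure (F_beta) from confusion-matrix counts: the harmonic mean of
# precision and recall when beta = 1. Counts below are illustrative only.

def f_measure(tp, fp, fn, beta=1.0):
    """F_beta score from true positives, false positives, false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical fire-event detector: 80 true alarms, 20 false alarms, 10 misses
score = f_measure(tp=80, fp=20, fn=10)   # precision 0.80, recall ~0.889
```

A low F-measure flags a detector whose false alarms or misses dominate, which is exactly the trade-off the fire-event evaluation above is concerned with.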
Abstract:
Since 1995, Florida has been one of the leading states in the country in initiating a high-stakes school accountability system. Public schools in Florida receive letter grades based on their performance on the Florida Comprehensive Assessment Test (FCAT). These school grades have significant effects on schools' reputations and funding. Consequently, the plan has been criticized for grading all schools in the same manner, without taking into account variables such as student poverty and mobility rates, which are beyond the control of the school. The purpose of this study was to examine the relationship of student variables (poverty and mobility rates) and teacher variables (average years of teacher experience and attained degree level) to FCAT math and reading performance. This research utilized an education production function model to examine which set of inputs (student or teacher) has a stronger influence on student academic output as measured by the FCAT. The data collected for this study came from over 1500 public elementary schools in Florida that listed all pertinent information for two school years (1998/1999 & 1999/2000) on the Florida Department of Education's website. It was concluded that student poverty, teacher average years of experience, and student mobility taken together provide a strong predictive measure of FCAT reading and math performance. However, the set of student inputs was significantly stronger than the teacher inputs. High student poverty was highly correlated with low FCAT scores. Teacher experience and student mobility rates were not nearly as strongly related to FCAT scores as was student poverty. The results of this study provide evidence for educators and other school stakeholders of the relative degree to which student and teacher variables are related to student academic achievement. The underlying reasons for these relationships will require further examination in future studies.
These results raise questions for Florida's school policymakers about the educational equity of the state's accountability system and its implementation.
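The building block behind comparing predictor strength in a production-function analysis is the correlation between an input and the outcome. A minimal sketch with Pearson's r; the data lists are synthetic placeholders, not the study's FCAT data:

```python
# Pearson correlation coefficient, the elementary ingredient of comparing
# predictor strength in a regression. Data below are SYNTHETIC placeholders,
# not the study's actual school records.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

poverty = [10, 30, 50, 70, 90]       # hypothetical: % of students in poverty
fcat    = [330, 310, 290, 270, 250]  # hypothetical: school mean scale score
r = pearson_r(poverty, fcat)         # strongly negative for this toy data
```

A strongly negative r between poverty and scores, as in this toy data, mirrors the qualitative finding reported above; the study itself used the full regression model, not raw correlations alone.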
Abstract:
Today's wireless networks rely mostly on infrastructural support for their operation. With the concept of ubiquitous computing growing more popular, research on infrastructureless networks has been rapidly growing. However, such networks face serious security challenges when deployed. This dissertation focuses on designing a secure routing solution and trust modeling for these infrastructureless networks. The dissertation presents a trusted routing protocol that is capable of finding a secure end-to-end route in the presence of malicious nodes acting either independently or in collusion. The solution protects the network from active internal attacks, known to be the most severe type of attack in an ad hoc application. Route discovery is based on the trust levels of the nodes, which need to be dynamically computed to reflect malicious behavior in the network. As such, we have developed a trust computational model, in conjunction with the secure routing protocol, that analyzes the different malicious behaviors and quantifies them in the model itself. Our work is the first step towards protecting an ad hoc network from colluding internal attacks. To demonstrate the feasibility of the approach, extensive simulation has been carried out to evaluate the protocol's efficiency and scalability with both network size and mobility. This research has laid the foundation for developing a variety of techniques that will permit people to justifiably trust the use of ad hoc networks to perform critical functions and to process sensitive information without depending on any infrastructural support, and hence will enhance the use of ad hoc applications in both military and civilian domains.
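The abstract does not spell out how trust levels are dynamically computed; one common generic scheme is an exponential moving average over observed forwarding behavior. The sketch below is such a generic illustration, not the dissertation's actual trust computational model:

```python
# Generic trust update via exponential moving average (EMA). This is an
# illustration of "dynamically computed" trust, NOT the dissertation's model;
# the alpha value and the 0/1 observation encoding are assumptions.

def update_trust(trust, observation, alpha=0.2):
    """Blend a new observation (1.0 = cooperative, 0.0 = malicious)
    into the running trust value for a neighbor node."""
    return (1 - alpha) * trust + alpha * observation

trust = 0.5                      # neutral prior for a newly met node
for obs in (1, 1, 0, 1):         # three good actions, one dropped packet
    trust = update_trust(trust, obs)
```

An EMA weights recent behavior more heavily, so a node that turns malicious sees its trust decay quickly, which is the property route discovery over trust levels relies on.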
Abstract:
Next-generation integrated wireless local area network (WLAN) and 3G cellular networks aim to combine the roaming ability of a cellular network with the high data rate services of a WLAN. To ensure successful implementation of an integrated network, many issues must be carefully addressed, including network architecture design, resource management, quality-of-service (QoS), call admission control (CAC), and mobility management. This dissertation focuses on QoS provisioning, CAC, and network architecture design in the integration of WLANs and cellular networks. First, a new scheduling algorithm and a call admission control mechanism for the IEEE 802.11 WLAN are presented to support multimedia services with QoS provisioning. The proposed scheduling algorithm makes use of idle system time to reduce the average packet loss of real-time (RT) services. The admission control mechanism provides long-term transmission quality for both RT and non-real-time (NRT) services by ensuring the packet loss ratio for RT services and the throughput for NRT services. A joint CAC scheme is proposed to efficiently balance traffic load in the integrated environment. A channel searching and replacement algorithm (CSR) is developed to relieve traffic congestion in the cellular network by using idle channels in the WLAN. The CSR is optimized to minimize the system cost in terms of the blocking probability in the interworking environment. Specifically, it is proved that there exists an optimal admission probability for passive handoffs that minimizes the total system cost, and a method for finding this probability is designed based on linear-programming techniques. Finally, a new integration architecture, Hybrid Coupling with Radio Access System (HCRAS), is proposed to lower the average cost of intersystem communication (IC) and the vertical handoff latency.
An analytical model is presented to evaluate the system performance of the HCRAS in terms of the intersystem communication cost function and the handoff cost function. Based on this model, an algorithm is designed to determine the optimal route for each intersystem communication. Additionally, a fast handoff algorithm is developed to reduce the vertical handoff latency.^
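The blocking probability at the heart of the CAC cost function is classically computed with the Erlang-B formula. The recursion below is standard teletraffic background, offered as context for the cost terms above rather than as the dissertation's joint CAC model:

```python
# Classic Erlang-B recursion for call blocking probability: textbook
# teletraffic background, not the dissertation's joint CAC scheme.

def erlang_b(channels, offered_load):
    """Blocking probability for `channels` servers and `offered_load`
    in Erlangs, via the numerically stable recursion
    B(n) = a*B(n-1) / (n + a*B(n-1)), B(0) = 1."""
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# 1 Erlang offered to 2 channels blocks 20% of calls
p_block = erlang_b(2, 1.0)
```

Adding channels at fixed load drives the blocking probability down sharply, which is why borrowing idle WLAN channels, as the CSR algorithm does, relieves cellular congestion.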
Abstract:
This dissertation proposes a self-organizing medium access control (MAC) protocol for wireless sensor networks (WSNs). The proposed MAC protocol, space division multiple access (SDMA), relies on sensor node position information and grants sensor nodes access to the wireless channel based on their spatial locations. SDMA divides a geographical area into space divisions, with a one-to-one map between the space divisions and the time slots. The protocol therefore requires only that each sensor node know its own position and have prior knowledge of the one-to-one mapping function. The scheme is scalable, self-maintaining, and self-starting. It provides collision-free access to the wireless channel for the sensor nodes, thereby guaranteeing delay-bounded communication in real time for delay-sensitive applications. This work was divided into two parts. The first part involved the design of the mapping function from space divisions to time slots. The mapping function is based on a uniform Latin square. A uniform Latin square of order k = m² is a k × k square matrix consisting of k symbols from 0 to k-1 such that no symbol appears more than once in any row, in any column, or in any m × m main subsquare. The uniqueness of each symbol within the main subsquares is a very attractive characteristic when applying a uniform Latin square to the time slot allocation problem in WSNs. The second part of this research involved designing a GPS-free positioning system, called the time and power based localization scheme (TPLS), to supply the position information. TPLS is based on time difference of arrival (TDoA) and received signal strength (RSS), using radio frequency and ultrasonic signals to measure and detect the range differences from a sensor node to three anchor nodes. TPLS requires low computation overhead and no time synchronization, as the location estimation algorithm involves only simple algebraic operations.
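One well-known construction satisfying the uniform Latin square definition above is the "Sudoku base pattern": row, column, and every m × m main subsquare each contain all k = m² symbols exactly once. This sketch shows that construction; the dissertation's specific mapping function may differ:

```python
# Uniform Latin square of order k = m*m via the Sudoku base pattern:
# entry (r, c) = (m*(r mod m) + r//m + c) mod k. Each row, column, and
# m x m main subsquare contains every symbol 0..k-1 exactly once.
# This is ONE valid construction, not necessarily the dissertation's.

def uniform_latin_square(m):
    k = m * m
    return [[(m * (r % m) + r // m + c) % k for c in range(k)]
            for r in range(k)]

square = uniform_latin_square(2)
# square's rows, columns, and 2x2 main subsquares each hold {0, 1, 2, 3},
# so mapping symbol -> time slot gives every space division in a subsquare
# a distinct slot, i.e. collision-free access among spatial neighbors.
```

In the SDMA setting, a node in space division (r, c) would transmit in the time slot numbered by its square entry, so any two divisions sharing a row, column, or subsquare never collide.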
Abstract:
Structural vibration control is of great importance. Current active and passive vibration control strategies usually employ individual elements to fulfill this task, such as viscoelastic patches for providing damping, transducers for picking up signals, and actuators for applying actuating forces. The goal of this dissertation work is to design, manufacture, investigate, and apply a new type of multifunctional composite material for structural vibration control. This new composite, which is based on a multi-walled carbon nanotube (MWCNT) film, can potentially function as a free-layer damping treatment and a strain sensor simultaneously. That is, the new material integrates the transducer and the damping patch into one element. The multifunctional composite was prepared by sandwiching the MWCNT film between two adhesive layers. Static sensing tests indicated that the MWCNT film sensor's resistance changes almost linearly with the applied load, with sensitivity factors comparable to those of foil strain gauges. Dynamic tests indicated that the MWCNT film sensor can outperform the foil strain gauge in high frequency ranges. Temperature tests indicated the MWCNT sensor had good temperature stability over the range of 237 K to 363 K. The Young's modulus and shear modulus of the MWCNT film composite were acquired by a nanoindentation test and a direct shear test, respectively. A free vibration damping test indicated that the MWCNT composite sensor can also provide good damping without adding excessive weight to the base structure. A new model for sandwich structural vibration control was then proposed. In this new configuration, a cantilever beam covered with the MWCNT composite on top and one layer of shape memory alloy (SMA) on the bottom was used to illustrate the concept. The MWCNT composite simultaneously serves as free-layer damping and strain sensor, and the SMA acts as the actuator.
A simple on-off controller was designed to control the temperature of the SMA, and thereby the SMA recovery stress used as the control input and the system stiffness. Both free and forced vibrations were analyzed. Simulation work showed that this new configuration for sandwich structural vibration control was successful, especially for low-frequency systems.
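An on-off (bang-bang) temperature controller of the kind described above is typically implemented with a hysteresis band so the heater does not chatter around the setpoint. A minimal sketch; the setpoint and band values are illustrative, not the dissertation's parameters:

```python
# Bang-bang (on-off) temperature control with a hysteresis dead band,
# in the spirit of the SMA controller described above. The setpoint and
# band values are illustrative assumptions, not the dissertation's numbers.

def on_off_control(temperature, heater_on, setpoint=70.0, band=2.0):
    """Return the new heater state given the measured SMA temperature (C)."""
    if temperature < setpoint - band:
        return True          # too cold: switch heater on
    if temperature > setpoint + band:
        return False         # too hot: switch heater off
    return heater_on         # inside the dead band: keep the current state

state = on_off_control(65.0, heater_on=False)   # cold start: heater turns on
```

Holding the SMA temperature near the setpoint fixes its recovery stress, which is how the controller indirectly sets the stiffness of the sandwich structure.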
Abstract:
Recent advances in electronic and computer technologies have led to the widespread deployment of wireless sensor networks (WSNs). WSNs have a wide range of applications, including military sensing and tracking, environment monitoring, and smart environments. Many WSNs carry mission-critical tasks, particularly in military applications, so security issues in WSNs remain at the forefront of research. Compared with other wireless networks, such as ad hoc and cellular networks, security in WSNs is more complicated due to the constrained capabilities of sensor nodes and the properties of the deployment, such as large scale and hostile environments. Security issues mainly come from attacks. In general, attacks in WSNs can be classified as external or internal. In an external attack, the attacking node is not an authorized participant of the sensor network; cryptography and other security methods can prevent some external attacks. However, node compromise, the major and unique problem that leads to internal attacks, can defeat all such preventive efforts. Knowing the probability of node compromise helps systems detect and defend against it. Although there are some approaches that can be used to detect and defend against node compromise, few of them can estimate its probability. Hence, we develop basic uniform, basic gradient, intelligent uniform, and intelligent gradient models of node compromise distribution, using probability theory, in order to adapt to different application environments. These models allow systems to estimate the probability of node compromise. Applying these models in system security designs can improve system security and decrease overhead in nearly every security area.
Moreover, based on these models, we design a novel secure routing algorithm to address the routing security issue posed by nodes that have already been compromised but have not yet been detected by the node compromise detection mechanism. The routing paths in our algorithm detour around nodes that have been detected as compromised or that have larger probabilities of being compromised. Simulation results show that our algorithm is effective in protecting routing paths from node compromise, whether detected or not.
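One simple way to realize such a detour is to treat each node's compromise probability as a cost and search for the path of minimum total risk, e.g. with Dijkstra's algorithm over node weights. This is a generic sketch of the idea, not the dissertation's exact secure routing algorithm:

```python
# Risk-aware detour routing sketch: Dijkstra over node compromise
# probabilities, minimizing the summed risk of intermediate nodes.
# A generic illustration, NOT the dissertation's algorithm; the toy
# topology and probabilities below are made up.
import heapq

def safest_path(graph, p_compromise, src, dst):
    """graph: node -> iterable of neighbors; p_compromise: node -> probability.
    Returns (total_risk, path) minimizing the sum of visited-node risks."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        risk, node, path = heapq.heappop(pq)
        if node == dst:
            return risk, path
        if node in seen:
            continue
        seen.add(node)
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                heapq.heappush(pq, (risk + p_compromise[nxt], nxt, path + [nxt]))
    return float("inf"), []

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
risk, path = safest_path(graph, {"A": 0.0, "B": 0.9, "C": 0.1, "D": 0.0},
                         "A", "D")
# The route detours around the high-risk node B, going A -> C -> D.
```

Nodes flagged as compromised can simply be assigned probability 1.0, which makes the search avoid them whenever any alternative exists.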
Abstract:
The development cost of any civil infrastructure is very high; during its life span, a civil structure undergoes numerous physical loads and environmental effects that damage it. Failing to identify this damage at an early stage may result in severe property loss and may become a potential threat to people and the environment. Thus, there is a need to develop effective damage detection techniques to ensure the safety and integrity of the structure. One Structural Health Monitoring method for evaluating a structure is statistical analysis. In this study, a civil structure measuring 8 feet in length and 3 feet in diameter, embedded with thermocouple sensors at 4 different levels, was analyzed under controlled and variable conditions. With the help of statistical analysis, possible damage to the structure was assessed, and the analysis could detect structural defects at various levels of the structure.
Abstract:
The purpose of this research is to develop design considerations for environmental monitoring platforms for the detection of hazardous materials using System-on-a-Chip (SoC) design. The design considerations focus on improving key areas such as: (1) sampling methodology; (2) context awareness; and (3) sensor placement. These design considerations for environmental monitoring platforms using wireless sensor networks (WSNs) are applied to the detection of methylmercury (MeHg) and the environmental parameters affecting its formation (methylation) and degradation (demethylation). The sampling methodology investigates a proof-of-concept for the monitoring of MeHg using three primary components: (1) chemical derivatization; (2) preconcentration using the purge-and-trap (P&T) method; and (3) sensing using Quartz Crystal Microbalance (QCM) sensors. This study focuses on the measurement of inorganic mercury (Hg) (e.g., Hg2+) and applies lessons learned to organic Hg (e.g., MeHg) detection. Context awareness of a WSN and its sampling strategies is enhanced by using spatial analysis techniques, namely geostatistical analysis (i.e., classical variography and ordinary point kriging), to help predict the phenomena of interest at unmonitored locations (i.e., locations without sensors). This aids in making more informed decisions on control of the WSN (e.g., communications strategy, power management, resource allocation, sampling rate and strategy, etc.). This methodology improves the precision of controllability by adding potentially significant information about unmonitored locations. Two types of sensors are investigated in this study for near-optimal placement in a WSN: (1) environmental sensors (e.g., humidity, moisture, temperature, etc.) and (2) visual sensors (e.g., cameras). The near-optimal placement of environmental sensors is found using a strategy that minimizes the variance of the spatial analysis based on randomly chosen points representing the sensor locations.
Spatial analysis is employed using geostatistical analysis, and optimization is carried out with Monte Carlo analysis. Visual sensor placement is accomplished for omnidirectional cameras operating in a WSN using an optimal placement metric (OPM), which is calculated for each grid point based on line-of-sight (LOS) in a defined number of directions, with known obstacles taken into consideration. Optimal areas for camera placement are determined from the areas generating the largest OPMs. The statistics are examined by Monte Carlo analysis with varying numbers of obstacles and cameras in a defined space.
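An OPM-style score of the kind described above can be sketched as counting, for each grid point, how many directions have an unobstructed line of sight within some range. This is a simplified reading of the metric (8 compass directions, fixed range), not the dissertation's exact formulation:

```python
# Simplified optimal-placement-metric (OPM) sketch for an omnidirectional
# camera on a grid: count the compass directions with LOS clear of obstacles
# within a fixed range. The direction set and range are assumptions; the
# dissertation's OPM may be defined differently.

def opm(grid, r0, c0, max_range=3):
    """grid: 2D list with 1 = obstacle. Score in 0..8 = number of the
    eight compass directions free of obstacles within max_range cells."""
    dirs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    rows, cols = len(grid), len(grid[0])
    score = 0
    for dr, dc in dirs:
        clear = True
        for step in range(1, max_range + 1):
            r, c = r0 + dr * step, c0 + dc * step
            if not (0 <= r < rows and 0 <= c < cols):
                break                 # edge of the area: stop looking
            if grid[r][c] == 1:
                clear = False         # an obstacle blocks this direction
                break
        score += clear
    return score

open_area = [[0] * 5 for _ in range(5)]
center_score = opm(open_area, 2, 2)   # all eight directions are clear
```

Grid points with the largest scores then mark the candidate areas for camera placement, and a Monte Carlo sweep over obstacle layouts probes how robust those areas are.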
Abstract:
In the discussion “Indirect Cost Factors in Menu Pricing,” David V. Pavesic, Associate Professor of Hotel, Restaurant and Travel Administration at Georgia State University, initially states: “Rational pricing methodologies have traditionally employed quantitative factors to mark up food and beverage or food and labor because these costs can be isolated and allocated to specific menu items. There are, however, a number of indirect costs that can influence the price charged because they provide added value to the customer or are affected by supply/demand factors.” The author discusses these costs and the factors that must be taken into account in pricing decisions. Professor Pavesic offers as a given that menu pricing should cover costs, return a profit, reflect a value for the customer, and, in the long run, attract customers and market the establishment. “Prices that are too high will drive customers away, and prices that are too low will sacrifice profit,” Professor Pavesic puts it succinctly. To dovetail with this premise, the author notes that although food costs figure markedly in menu pricing, other factors such as equipment utilization, popularity/demand, and marketing are but a few of the parenthetic factors also to be considered. “… there is no single method that can be used to mark up every item on any given restaurant menu. One must employ a combination of methodologies and theories,” says Professor Pavesic. “Therefore, when properly carried out, prices will reflect food cost percentages, individual and/or weighted contribution margins, price points, and desired check averages, as well as factors driven by intuition, competition, and demand.” Additionally, Professor Pavesic wants you to know that value, as opposed to maximizing revenue, should be a primary motivating factor when designing menu pricing. This philosophy does come with certain caveats, and he explains them to you.
Generally speaking, Professor Pavesic says, “The market ultimately determines the price one can charge.” But in fine-tuning that decree he further offers, “Lower prices do not automatically translate into value and bargain in the minds of the customers. Having the lowest prices in your market may not bring customers or profit.” “Too often operators engage in price wars through discount promotions and find that profits fall and their image in the marketplace is lowered,” Professor Pavesic warns. Among the intangibles that influence menu pricing, service is at the top of the list; ambience, location, amenities, product [i.e., food] presentation, and price elasticity are discussed as well. Be aware of price-value perception; Professor Pavesic explains this concept to you. Professor Pavesic closes with a brief overview of a la carte pricing and its pros and cons.
Abstract:
Variable Speed Limit (VSL) strategies identify and disseminate dynamic speed limits that are determined to be appropriate based on prevailing traffic, road surface, and weather conditions. This dissertation develops and evaluates a shockwave-based VSL system that uses a heuristic switching logic-based controller with specified thresholds of prevailing traffic flow conditions. The system aims to improve operations and mobility at critical bottlenecks. Before traffic breakdown occurs, the proposed VSL's goal is to prevent or postpone breakdown by decreasing the inflow and achieving a uniform distribution in speed and flow. After breakdown occurs, the VSL system aims to dampen traffic congestion by reducing the inflow to the congested area and increasing the bottleneck capacity by deactivating the VSL at the head of the congested area. The shockwave-based VSL system pushes the VSL location upstream as the congested area propagates upstream. In addition to testing the system using infrastructure detector-based data, this dissertation investigates the use of Connected Vehicle trajectory data as input to the shockwave-based VSL system and its effect on performance. Since field Connected Vehicle data are not available, as part of this research Vehicle-to-Infrastructure communication was modeled in microscopic simulation to obtain individual vehicle trajectories. In this system, the wavelet transform is used to analyze aggregated individual-vehicle speed data to determine the locations of congestion. The currently recommended calibration procedures for simulation models are generally based on capacity, volume, and system-performance values and do not specifically examine traffic breakdown characteristics. However, since the proposed VSL strategies are countermeasures to the impacts of breakdown conditions, considering breakdown characteristics in the calibration procedure is important for a reliable assessment.
Several enhancements were proposed in this study to account for the breakdown characteristics at bottleneck locations in the calibration process. In this dissertation, performance of shockwave-based VSL is compared to VSL systems with different fixed VSL message sign locations utilizing the calibrated microscopic model. The results show that shockwave-based VSL outperforms fixed-location VSL systems, and it can considerably decrease the maximum back of queue and duration of breakdown while increasing the average speed during breakdown.
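A heuristic switching logic of the kind described above can be sketched as a threshold ladder mapping a measured traffic state (here, detector occupancy) to a posted speed limit. The thresholds and limit values below are illustrative assumptions, not the dissertation's calibrated parameters:

```python
# Threshold-based VSL switching logic sketch: map detector occupancy (%)
# to a posted speed limit (mph). All thresholds and limits here are
# illustrative, NOT the dissertation's calibrated controller values.

def vsl_speed_limit(occupancy_pct):
    """Return the posted speed limit for the measured occupancy."""
    if occupancy_pct < 12:
        return 70        # free flow: no restriction
    if occupancy_pct < 18:
        return 55        # approaching breakdown: slow the inflow
    if occupancy_pct < 25:
        return 45
    return 35            # congested: minimum limit

limits = [vsl_speed_limit(o) for o in (8, 15, 20, 30)]
```

In the shockwave-based system this decision would be re-evaluated per control interval and the sign location shifted upstream as the congested region grows, which the fixed-location systems it is compared against cannot do.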
Abstract:
The tragic events of September 11th ushered in a new era of unprecedented challenges. Our nation has to be protected from the alarming threats of adversaries, threats that exploit the nation's critical infrastructures and affect all sectors of the economy. There is a need for pervasive monitoring and decentralized control of the nation's critical infrastructures. The communication needs of monitoring and control of critical infrastructures were traditionally met by wired communication systems. These technologies ensured high reliability and bandwidth, but they are very expensive and inflexible and do not support mobility or pervasive monitoring. The communication protocols are Ethernet-based and use contention access, which results in high rates of unsuccessful transmission and high delay. An emerging class of wireless networks, embedded wireless sensor and actuator networks, has potential benefits for real-time monitoring and control of critical infrastructures. The use of embedded wireless networks for monitoring and control of critical infrastructures requires secure, reliable, and timely exchange of information among controllers, distributed sensors, and actuators. This exchange takes place over shared wireless media, which are highly unpredictable due to path loss, shadow fading, and ambient noise, while monitoring and control applications have stringent requirements on reliability, delay, and security. The primary issue addressed in this dissertation is the impact of wireless media in harsh industrial environments on the reliable and timely delivery of critical data. In the first part of the dissertation, a combined networking and information-theoretic approach was adopted to determine the transmit power required to maintain a minimum wireless channel capacity for reliable data transmission. The second part describes a channel-aware scheduling scheme that ensures efficient utilization of the wireless link and guarantees delay.
Analytical evaluations and simulations are used to validate the feasibility of the methodologies and to demonstrate that the protocols achieve reliable and real-time data delivery in wireless industrial networks.
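The information-theoretic step described above, finding the transmit power that sustains a minimum channel capacity, follows directly from inverting the Shannon capacity formula: C = B·log2(1 + P·g/N) gives P = N·(2^(C/B) − 1)/g. This is the standard single-link relation; the dissertation's combined approach adds networking constraints (fading, scheduling) not captured here:

```python
# Transmit power needed to sustain a minimum Shannon capacity C over a
# single AWGN link: invert C = B*log2(1 + P*g/N). Standard textbook
# relation; the example link parameters are illustrative assumptions.

def min_transmit_power(capacity, bandwidth, gain, noise):
    """Smallest P (watts) such that bandwidth*log2(1 + P*gain/noise)
    reaches `capacity` (bits/s), for channel power gain `gain` and
    receiver noise power `noise` (watts)."""
    return noise * (2 ** (capacity / bandwidth) - 1) / gain

# Example: 2 Mb/s over 1 MHz needs SNR = 2**2 - 1 = 3 at the receiver.
p = min_transmit_power(capacity=2e6, bandwidth=1e6, gain=1e-6, noise=1e-9)
```

Because the required power grows exponentially in the spectral efficiency C/B, harsh industrial channels with heavy path loss (small g) make sustaining high minimum capacities expensive, which motivates the channel-aware scheduling in the second part.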