22 results for Multi-platform Xamarin Mobile-computing
at Digital Commons at Florida International University
Abstract:
The deployment of wireless communications, coupled with the popularity of portable devices, has led to significant research in the area of mobile data caching. Prior research has focused on the development of solutions that allow applications to run in wireless environments using proxy-based techniques. Most of these approaches are semantic based and do not provide adequate support for representing the context of a user (i.e., the interpreted human intention). Although the context may be treated implicitly, it is still crucial to data management. To address this challenge, this dissertation focuses on two predictions: (i) the future location of the user, and (ii) the locations where a fetched data item remains a valid answer to the query. Using this approach, more complete information about the dynamics of an application environment is maintained. The contribution of this dissertation is a novel data caching mechanism for pervasive computing environments that can adapt dynamically to a mobile user's context. We design and develop a conceptual model and context-aware protocols for wireless data cache management. Our replacement policy uses the validity of the data fetched from the server and the neighboring locations to decide which cache entry is least likely to be needed in the future, and is therefore a good candidate for eviction when cache space is needed. The context-aware prefetching algorithm exploits the query context, defined by a mobile user's movement pattern and requested information context, to effectively guide the prefetching process. Numerical results and simulations show that the proposed prefetching and replacement policies significantly outperform conventional ones. Anticipated applications of these solutions include biomedical engineering, tele-health, medical information systems, and business.
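As a rough illustration of the replacement idea described above, the sketch below scores each cache entry by how far the user's predicted location lies outside the entry's validity region and evicts the worst offender. The class and function names, and the circular validity regions, are assumptions made for illustration, not the dissertation's actual design.

    # Minimal sketch of a validity- and location-aware cache replacement policy.
    # All names here are illustrative, not taken from the dissertation.
    import math

    class CacheEntry:
        def __init__(self, key, value, valid_center, valid_radius):
            self.key = key
            self.value = value
            self.valid_center = valid_center    # (x, y) where the answer is valid
            self.valid_radius = valid_radius    # radius of the validity region

    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def evict_candidate(cache, predicted_location):
        # Score each entry by how far the predicted location lies outside
        # its validity region; the worst offender is evicted first.
        def score(entry):
            return distance(predicted_location, entry.valid_center) - entry.valid_radius
        return max(cache, key=score)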
Abstract:
This dissertation studies context-aware applications and the algorithms proposed for the client side. The required context-aware infrastructure is discussed in depth to illustrate that such an infrastructure collects the mobile user's context information, registers service providers, derives the mobile user's current context, distributes user context among context-aware applications, and provides tailored services. The proposed approach strikes a balance between the context server and mobile devices: context acquisition is centralized at the server to ensure the reusability of context information among mobile devices, while context reasoning remains at the application level. Hence, centralized context acquisition combined with distributed context reasoning is viewed as the better overall solution. The context-aware search application is designed and implemented at the server side. A new algorithm is proposed to take user context profiles into consideration. By promoting feedback on the dynamics of the system, any prior user selection is saved for further analysis so that it may help improve the results of subsequent searches. On the basis of these server-side developments, various solutions are provided at the client side. A software-based proxy component is set up for the purpose of data collection. This research endorses the belief that the client-side proxy should contain the context reasoning component; implementing such a component lends credence to this belief in that context applications are able to derive the user context profiles. Furthermore, a context cache scheme is implemented to manage the cache on the client device in order to minimize processing requirements and other resources (bandwidth, CPU cycles, power). Java and MySQL platforms are used to implement the proposed architecture and to test scenarios derived from a user's daily activities. To meet the practical demands of a testing environment without the heavy cost of establishing a comprehensive infrastructure, a software simulation using the free Yahoo search API is provided as a means to evaluate the effectiveness of the design approach realistically. The integration of the Yahoo search engine into the context-aware architecture demonstrates how a context-aware application can meet user demands for tailored services and products in and around the user's environment. The test results show that the overall design is highly effective, providing new features and enriching the mobile user's experience through a broad scope of potential applications.
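A minimal sketch of the kind of client-side context cache the abstract describes, assuming an LRU policy keyed by (query, context) pairs; the class and its methods are illustrative assumptions, not the dissertation's API.

    # Illustrative client-side context cache; contexts are assumed hashable
    # (e.g., short strings). Names are assumptions, not the dissertation's API.
    from collections import OrderedDict

    class ContextCache:
        def __init__(self, capacity=64):
            self.capacity = capacity
            self.entries = OrderedDict()   # (query, context) -> results

        def get(self, query, context):
            key = (query, context)
            if key in self.entries:
                self.entries.move_to_end(key)   # refresh LRU position
                return self.entries[key]
            return None                          # miss: fetch from the server

        def put(self, query, context, results):
            key = (query, context)
            self.entries[key] = results
            self.entries.move_to_end(key)
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)   # evict least recently used

Caching results per (query, context) keeps repeated context-identical searches off the network, which is one plausible way to save the bandwidth, CPU cycles, and power the abstract mentions.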
Abstract:
The Semantic Binary Data Model (SBM) is a viable alternative to the now-dominant relational data model. SBM would be especially advantageous for applications dealing with complex interrelated networks of objects, provided that a robust, efficient implementation can be achieved. This dissertation presents an implementation design method for SBM, algorithms, and their analytical and empirical evaluation. Our method allows building a robust and flexible database engine with a wider applicability range and improved performance. Extensions to SBM are introduced, and an implementation of these extensions is proposed that allows the database engine to efficiently support applications with a predefined set of queries. A new Record data structure is proposed, and the trade-offs of employing Fact, Record, and Bitmap data structures for storing information in a semantic database are analyzed. A clustering ID distribution algorithm and an efficient algorithm for object ID encoding are proposed. Mapping to an XML data model is analyzed, and a new XML-based XSDL language facilitating interoperability of the system is defined. Solutions to issues associated with making the database engine multi-platform are presented, along with an improvement to the atomic update algorithm suitable for certain database recovery scenarios. Specific guidelines are devised for implementing a robust and well-performing database engine based on the extended Semantic Data Model.
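As a toy illustration of the semantic-binary storage idea, the sketch below keeps (subject, relation, value) facts sorted so that all facts about one object can be fetched with a binary-search range scan. This is a simplification for illustration only; it does not reproduce the dissertation's Fact, Record, or Bitmap structures.

    # Toy fact store: facts as (subject_id, relation, value) triples,
    # kept sorted for efficient per-object range lookups.
    import bisect

    class FactStore:
        def __init__(self):
            self.facts = []   # sorted list of (subject_id, relation, value)

        def add(self, subject_id, relation, value):
            bisect.insort(self.facts, (subject_id, relation, value))

        def facts_of(self, subject_id):
            # All facts about one object, found via two binary searches.
            lo = bisect.bisect_left(self.facts, (subject_id,))
            hi = bisect.bisect_left(self.facts, (subject_id + 1,))
            return self.facts[lo:hi]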
Abstract:
For the past several decades, we have experienced tremendous growth, in both scale and scope, of real-time embedded systems, thanks largely to advances in IC technology. However, the traditional approach of boosting performance by increasing CPU frequency is now a thing of the past, and researchers from both industry and academia are turning their focus to multi-core architectures for continuous improvement of computing performance. In our research, we seek to develop efficient scheduling algorithms and analysis methods for the design of real-time embedded systems on multi-core platforms. Real-time systems are those in which the response time is as critical as the logical correctness of the computational results. In addition, a variety of stringent constraints, such as power/energy consumption, peak temperature, and reliability, are imposed on these systems. Therefore, real-time scheduling plays a critical role in the system-level design of such computing systems. We started our research by addressing timing constraints for real-time applications on multi-core platforms, and developed both partitioned and semi-partitioned scheduling algorithms to schedule fixed-priority, periodic, hard real-time tasks on multi-core platforms. We then extended our research by taking temperature constraints into consideration: we developed a closed-form solution to capture the temperature dynamics of a given periodic voltage schedule on multi-core platforms, and three methods to check the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research by incorporating the power/energy constraint with thermal awareness. We investigated the energy estimation problem on multi-core platforms and developed a computationally efficient method to calculate the energy consumption of a given voltage schedule on a multi-core platform. In this dissertation, we present our research in detail and demonstrate the effectiveness and efficiency of our approaches with extensive experimental results.
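A minimal sketch of partitioned fixed-priority scheduling in the spirit described above: tasks are assigned to cores first-fit in decreasing-utilization order, admitting a task only while the classical Liu-Layland rate-monotonic bound still holds on that core. This is a textbook heuristic standing in for the dissertation's specific algorithms; the task format is an assumption.

    # Partitioned rate-monotonic scheduling: first-fit decreasing by utilization.

    def rm_bound(n):
        # Liu & Layland utilization bound for n periodic tasks under RM.
        return n * (2 ** (1.0 / n) - 1)

    def partition(tasks, num_cores):
        # tasks: list of (wcet, period); returns core -> task list, or None.
        cores = [[] for _ in range(num_cores)]
        util = [0.0] * num_cores
        for wcet, period in sorted(tasks, key=lambda t: t[0] / t[1], reverse=True):
            u = wcet / period
            for c in range(num_cores):
                if util[c] + u <= rm_bound(len(cores[c]) + 1):
                    cores[c].append((wcet, period))
                    util[c] += u
                    break
            else:
                return None   # no feasible assignment found by this heuristic
        return cores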
Abstract:
The Internet has become an integral part of our nation's critical socio-economic infrastructure. With its heightened use and growing complexity, however, organizations are at greater risk of cyber crime. To aid in the investigation of crimes committed on or via the Internet, a network forensics analysis tool pulls together the needed digital evidence. It provides a platform for performing deep network analysis by capturing, recording, and analyzing network events to find the source of a security attack or other information security incidents. Existing network forensics work has mostly focused on the Internet and fixed networks, but the exponential growth and use of wireless technologies, coupled with their unprecedented characteristics, necessitate the development of new network forensic analysis tools. This dissertation fostered the emergence of a new research field in cellular and ad-hoc network forensics. It was one of the first works to identify this problem and offer fundamental techniques and tools that laid the groundwork for future research. In particular, it introduced novel methods to record network incidents and to report logged incidents. For recording incidents, location is considered essential to documenting network incidents; however, in network topology spaces, location cannot be measured due to the absence of a distance metric. Therefore, a novel solution was proposed to label the locations of nodes within network topology spaces, and then to authenticate the identity of nodes in ad hoc environments. For reporting logged incidents, a novel technique based on Distributed Hash Tables (DHTs) was adopted. Although the direct use of DHTs for reporting logged incidents would result in uncontrollably recursive traffic, a new mechanism was introduced that overcomes this recursion. These logging and reporting techniques aided forensics over cellular and ad-hoc networks, which in turn increased the ability to track and trace attacks to their source. These techniques are a starting point for further research and development that would equip future ad hoc networks with forensic components to complement existing security mechanisms.
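A hedged sketch of DHT-style incident reporting: the incident identifier is hashed onto a ring of node IDs to pick the reporting node, and a TTL caps forwarding so reports cannot recurse indefinitely. The TTL stands in for the dissertation's actual anti-recursion mechanism, which the abstract does not detail; all names are illustrative.

    # Consistent-hashing selection of a reporting node, plus a TTL guard.
    import hashlib

    def report_node(incident_id, node_ids, id_space=2**32):
        # The report goes to the first node clockwise of the incident's
        # hash point on the ring.
        point = int(hashlib.sha1(incident_id.encode()).hexdigest(), 16) % id_space
        ring = sorted(node_ids)
        for node in ring:
            if node >= point:
                return node
        return ring[0]   # wrap around the ring

    def forward_report(report, ttl=8):
        # Each hop decrements the TTL; a zero TTL stops recursive re-reporting.
        if ttl <= 0:
            return None
        return {"report": report, "ttl": ttl - 1}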
Abstract:
Recently, wireless network technology has grown at such a pace that scientific research becomes practical reality within a very short time span. Mobile wireless communications have witnessed the adoption of several generations, each complementing and improving on the former. One mobile system that features high data rates and an open network architecture is 4G. Currently, the research community and industry in the field of wireless networks are working on possible solutions for the 4G system. 4G is a collection of technologies and standards that will allow a range of ubiquitous computing and wireless communication architectures. The researcher considers the ability to guarantee reliable communications, from 100 Mbps on high-mobility links up to 1 Gbps for low-mobility users, together with high efficiency in spectrum usage, to be among the most important characteristics of future 4G mobile systems. In mobile wireless communications networks, one important factor is the coverage of large geographical areas. In 4G systems, a hybrid satellite/terrestrial network is crucial to providing users with coverage wherever needed. Subscribers thus require a reliable satellite link to access their services when they are in remote locations where a terrestrial infrastructure is unavailable, and must rely upon satellite coverage. A good modulation and access technique is also required to transmit high data rates over satellite links to mobile users. This technique must adapt to the characteristics of the satellite channel and also use the allocated bandwidth efficiently. Satellite links are fading channels when used by mobile users. Measures designed to cope with these fading environments make use of: (1) spatial diversity (a two-receive-antenna configuration); (2) time diversity (channel interleaving/spreading techniques); and (3) upper-layer FEC. The author proposes the use of OFDM (Orthogonal Frequency-Division Multiplexing) for the satellite link, increasing the time diversity. This technique will allow an increase in the data rate, as primarily required by multimedia applications, and will also make optimal use of the available bandwidth. In addition, this dissertation addresses the use of cooperative satellite communications for hybrid satellite/terrestrial networks. By using this technique, satellite coverage can be extended to areas where there is no direct link to the satellite. For this purpose, a good channel model is necessary.
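A minimal OFDM transmit sketch (an IFFT across the subcarriers plus a cyclic prefix), the standard building block the proposal relies on for the fading satellite link; the subcarrier count and prefix length below are illustrative, not values from the dissertation.

    # OFDM modulation: IFFT per block, then prepend a cyclic prefix.
    import numpy as np

    def ofdm_modulate(symbols, n_subcarriers=64, cp_len=16):
        # Map complex symbols onto subcarriers, one OFDM symbol per row.
        blocks = symbols.reshape(-1, n_subcarriers)
        time_domain = np.fft.ifft(blocks, axis=1)
        # The cyclic prefix (a copy of each block's tail) absorbs multipath echoes.
        cp = time_domain[:, -cp_len:]
        return np.hstack([cp, time_domain]).ravel()

    # Example: 128 random QPSK symbols -> two OFDM symbols with cyclic prefixes.
    qpsk = (np.random.choice([-1, 1], 128)
            + 1j * np.random.choice([-1, 1], 128)) / np.sqrt(2)
    tx = ofdm_modulate(qpsk)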
Abstract:
Global connectivity, for anyone, at any place, at any time, providing high-speed, high-quality, and reliable communication channels for mobile devices, is now becoming a reality. The credit mainly goes to recent technological advances in wireless communications, which comprise a wide range of technologies, services, and applications to fulfill the particular needs of end-users in different deployment scenarios (Wi-Fi, WiMAX, and 3G/4G cellular systems). In such a heterogeneous wireless environment, one of the key ingredients for providing efficient ubiquitous computing with guaranteed quality and continuity of service is the design of intelligent handoff algorithms. Traditional single-metric handoff decision algorithms, such as those based on Received Signal Strength (RSS), are not efficient and intelligent enough to minimize the number of unnecessary handoffs, decision delays, and call-dropping and/or call-blocking probabilities. This research presents a novel approach to the design and implementation of a multi-criteria vertical handoff algorithm for heterogeneous wireless networks. Several parallel fuzzy logic controllers were utilized in combination with different types of ranking algorithms and metric-weighting schemes to implement two major modules: the first module estimates the necessity of a handoff, and the second selects the best network as the handoff target. Simulations based on different traffic classes and various types of wireless networks were carried out on a wireless test-bed inspired by the concept of the Rudimentary Network Emulator (RUNE). Simulation results indicate that the proposed scheme provides better performance in terms of minimizing unnecessary handoffs, call dropping, call blocking, and handoff blocking. When subjected to conversational traffic and compared against the RSS-based reference algorithm, the proposed scheme, utilizing the FTOPSIS ranking algorithm, was able to reduce the average outage probability of MSs moving at high speed by 17%, the new-call blocking probability by 22%, the handoff blocking probability by 16%, and the average handoff rate by 40%. The significant reduction in the resulting handoff rate gives the MS more efficient power consumption and longer battery life. These percentages indicate a higher probability of guaranteed session continuity and quality of the currently utilized service, resulting in higher user satisfaction levels.
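A sketch of a TOPSIS-style ranking step for target-network selection, assuming vector-normalized criteria and fixed weights; the metrics, weights, and candidate matrix below are illustrative, and the fuzzy front end of the actual algorithm is omitted.

    # Classical TOPSIS ranking: closeness to the ideal solution, per candidate.
    import numpy as np

    def topsis_rank(matrix, weights, benefit):
        # matrix: candidates x criteria; benefit[j] is True if higher is better.
        norm = matrix / np.linalg.norm(matrix, axis=0)          # vector-normalize
        v = norm * weights                                       # weighted matrix
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # best per criterion
        worst = np.where(benefit, v.min(axis=0), v.max(axis=0))  # worst per criterion
        d_best = np.linalg.norm(v - ideal, axis=1)
        d_worst = np.linalg.norm(v - worst, axis=1)
        return d_worst / (d_best + d_worst)   # closeness: higher = better network

    # Candidates (rows): e.g. Wi-Fi, WiMAX, cellular;
    # criteria (cols): RSS, bandwidth, cost (cost is not a benefit).
    m = np.array([[0.8, 54.0, 0.1], [0.6, 70.0, 0.3], [0.9, 10.0, 0.5]])
    scores = topsis_rank(m, np.array([0.5, 0.3, 0.2]), np.array([True, True, False]))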
Abstract:
Catering to society's demand for high-performance computing, billions of transistors are now integrated on IC chips to deliver unprecedented performance. With increasing transistor density, power consumption and power density are growing exponentially. The increasing power consumption directly translates into high chip temperatures, which not only raise packaging/cooling costs, but also degrade the performance, reliability, and life span of computing systems. Moreover, high chip temperature also greatly increases leakage power consumption, which is becoming more and more significant with the continuous scaling of transistor sizes. As the semiconductor industry continues to evolve, power and thermal challenges have become the most critical challenges in the design of new generations of computing systems. In this dissertation, we address the power/thermal issues from a system-level perspective. Specifically, we seek to employ real-time scheduling methods to optimize the power/thermal efficiency of real-time computing systems, with the leakage/temperature dependency taken into consideration. In our research, we first explored the fundamental principles of how to employ dynamic voltage scaling (DVS) techniques to reduce the peak operating temperature when running a real-time application on a single-core platform. We further proposed a novel real-time scheduling method, "M-Oscillations", to reduce the peak temperature when scheduling a hard real-time periodic task set. We also developed three checking methods to guarantee the feasibility of a periodic real-time schedule under a peak temperature constraint. We then extended our research from single-core to multi-core platforms: we investigated the energy estimation problem on multi-core platforms and developed a lightweight and accurate method to calculate the energy consumption of a given voltage schedule on a multi-core platform. Finally, we conclude the dissertation with a discussion of future extensions of our research.
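A toy lumped-RC thermal model of the kind a peak-temperature feasibility check would evaluate: under a piecewise-constant power schedule, the temperature trajectory has a closed form per segment. The thermal constants and the alternating schedule below are illustrative assumptions, not the dissertation's parameters.

    # dT/dt = (P - (T - T_amb)/R) / C  =>  per-segment closed form.
    import math

    def peak_temperature(schedule, t_amb=35.0, r=1.0, c=10.0, t0=35.0):
        # schedule: list of (power_watts, duration_seconds) pairs.
        temp, peak = t0, t0
        for power, dt in schedule:
            steady = t_amb + power * r                # steady-state temperature
            temp = steady + (temp - steady) * math.exp(-dt / (r * c))
            peak = max(peak, temp)                    # monotone within a segment
        return peak

    # e.g. alternating high/low power phases, an "M-Oscillations"-like pattern
    print(peak_temperature([(20.0, 5.0), (5.0, 5.0)] * 4))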
Abstract:
The ability to use Software Defined Radio (SDR) in civilian mobile applications will make it possible for the next generation of mobile devices to handle multi-standard personal wireless devices and ubiquitous wireless devices. The original military standard created many beneficial characteristics for SDR, but resulted in a number of disadvantages as well. Many challenges in commercializing SDR are still the subject of interest in the software radio research community. Four main issues that have already been addressed are performance, size, weight, and power. This investigation presents an in-depth study of SDR inter-component communications in terms of total link delay as a function of the number of components and packet sizes in systems based on the Software Communication Architecture (SCA). The study is based on investigation of a controlled-environment platform. Results suggest that the total link delay does not increase linearly with the number of components and the packet sizes. A closed-form expression for the delay was modeled using a logistic function of the number of components and packet sizes; the model performed well when the number of components was large. For mobile applications, energy consumption has become one of the most crucial limitations. SDR will not only provide the flexibility of multi-protocol support, but this desirable feature will also bring a choice of mobile protocols, and having such a variety of choices creates the problem of selecting the most appropriate protocol for transmission. An investigation into a real-time algorithm to optimize energy efficiency was also performed. Communication energy models, including switching estimation, were used to develop a waveform selection algorithm, and simulations were performed to validate the concept.
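A sketch of the logistic-form delay model described above, with total link delay saturating as a function of component count and packet size; the coefficients and the way the two load factors are combined are placeholders, not the fitted values from the study.

    # Logistic model of total link delay vs. component count and packet size.
    import math

    def link_delay_ms(n_components, packet_bytes, d_max=50.0, k=0.15, x0=30.0):
        # Combine the two load factors into one scalar term (an assumption),
        # then saturate with a logistic curve: delay flattens as load grows.
        load = n_components + packet_bytes / 256.0
        return d_max / (1.0 + math.exp(-k * (load - x0)))

    for n in (5, 20, 80):
        print(n, round(link_delay_ms(n, 1024), 2))   # growth tapers off for large n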
Abstract:
The improvement in living standards and the development of telecommunications have led to a large increase in the number of Internet users in China. The China National Network Information Center has reported that the number of Internet users in China reached 33.7 million in 2001, ranking the country third in the world. This figure also shows that more and more Chinese residents have accepted the Internet and use it to obtain information and complete their travel planning. Milne and Ateljevic stated that the integration of computing and telecommunications would create a global information network based mostly on the Internet. The Internet, especially the World Wide Web, has had a great impact on the hospitality and tourism industry in recent years. The WWW plays an important role in mediating between customers and hotel companies as a place to acquire information and transact business.
Abstract:
This research was undertaken to explore dimensions of the risk construct, identify factors related to risk-taking in education, and study risk propensity among employees at a community college. Risk-taking propensity (RTP) was measured by the 12-item BCDQ, which consisted of personal and professional risk-related situations balanced for the money, reputation, and satisfaction dimensions of the risk construct. Scoring ranged from 1.00 (most cautious) to 6.00 (most risky). Surveys including the BCDQ and seven demographic questions relating to age, gender, professional status, length of service, academic discipline, highest degree, and campus location were sent to faculty, administrators, and academic department heads. A total of 325 surveys were returned, resulting in a 66.7% response rate. Subjects were relatively homogeneous for age, length of service, and highest degree. Subjects were also homogeneous for risk-taking propensity: no substantive differences in RTP scores were noted within or among demographic groups, with the possible exception of academic discipline. The mean RTP score was 3.77 for all subjects, 3.76 for faculty, 3.83 for administrators, and 3.64 for department heads. The relationship between the propensity to take personal risks and the propensity to take professional risks was tested by computing Pearson r correlation coefficients. The relationships for the total sample, faculty, and administrator groups were statistically significant but of limited practical significance. Subjects were placed into risk categories by dividing the response scale into thirds. A 3 x 3 factorial ANOVA revealed no interaction effects between professional status and risk category with regard to RTP score. A discriminant analysis showed that a seven-factor model was not effective in predicting risk category. The homogeneity of the study sample and the effect of a risk-encouraging environment were discussed in the context of the community college. Since very little data on risk-taking in education is available, the risk propensity data from this study could serve as a basis for comparison in future research. Results could be used by institutions to plan professional development activities designed to increase risk-taking and encourage active acceptance of change.
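For concreteness, the snippet below shows the Pearson correlation computation named above, run on synthetic data of the same shape (325 respondents on a 1.00-6.00 scale); the numbers are synthetic, purely to show the computation, not the study's data.

    # Pearson r between personal and professional risk scores (synthetic data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    personal = rng.normal(3.8, 0.6, 325)                      # RTP-like scale
    professional = 0.3 * personal + rng.normal(2.6, 0.6, 325)  # weakly related

    r, p = stats.pearsonr(personal, professional)
    print(f"Pearson r = {r:.2f}, p = {p:.3g}")   # significant yet modest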
Abstract:
In recent years, there has been enormous growth in location-aware devices, such as GPS-embedded cell phones, mobile sensors, and radio-frequency identification tags. The age of combining sensing, processing, and communication in one device gives rise to a vast number of applications, leading to endless possibilities and the realization of mobile Wireless Sensor Network (mWSN) applications. As computing, sensing, and communication become more ubiquitous, trajectory privacy becomes a critical piece of information and an important factor for commercial success. While on the move, sensor nodes continuously transmit data streams of sensed values and spatiotemporal information, known as "trajectory information". If adversaries can intercept this information, they can monitor the trajectory path and capture the location of the source node. This research stems from the recognition that the wide applicability of mWSNs will remain elusive unless a trajectory privacy preservation mechanism is developed. The outcome seeks to lay a firm foundation in the field of trajectory privacy preservation in mWSNs against both external and internal trajectory privacy attacks. First, to prevent external attacks, we investigated a context-based trajectory-privacy-aware routing protocol to prevent eavesdropping attacks. Traditional shortest-path-oriented routing algorithms give adversaries the possibility to locate the target node within a certain area. We designed a novel privacy-aware routing phase and utilized the trajectory dissimilarity between mobile nodes to mislead adversaries about the location where the message started its journey. Second, to detect internal attacks, we developed a software-based attestation solution to detect compromised nodes. We created a dynamic attestation node chain among neighboring nodes to examine the memory checksum of suspicious nodes; the computation time for memory traversal was improved compared to previous work. Finally, we revisited the trust issue in the design of trajectory privacy preservation mechanisms, using Bayesian game theory to model and analyze cooperative, selfish, and malicious nodes' behaviors in trajectory privacy preservation activities.
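An illustrative sketch of dissimilarity-guided next-hop selection: prefer the neighbor whose recent trajectory diverges most from the source's, misleading an eavesdropper about where the message started. The dissimilarity metric and node format below are assumptions, not the protocol's actual definitions.

    # Next-hop choice by trajectory dissimilarity (illustrative metric).
    import math

    def dissimilarity(traj_a, traj_b):
        # Mean pointwise distance between two equally sampled trajectories.
        return sum(math.dist(p, q) for p, q in zip(traj_a, traj_b)) / len(traj_a)

    def pick_next_hop(source_traj, neighbors):
        # neighbors: list of (node_id, trajectory); choose the most dissimilar,
        # steering the route away from the source's own movement pattern.
        return max(neighbors, key=lambda n: dissimilarity(source_traj, n[1]))[0]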
Abstract:
Cloud computing realizes the long-held dream of converting computing capability into a type of utility. It has the potential to fundamentally change the landscape of the IT industry and our way of life. However, as cloud computing expands substantially in both scale and scope, ensuring its sustainable growth is a critical problem. Service providers have long suffered from high operational costs, especially those associated with the skyrocketing power consumption of large data centers. In the meantime, while efficient power/energy utilization is indispensable for the sustainable growth of cloud computing, service providers must also satisfy users' quality of service (QoS) requirements. This problem becomes even more challenging considering the increasingly stringent power/energy and QoS constraints, as well as other factors such as the highly dynamic, heterogeneous, and distributed nature of the computing infrastructure. In this dissertation, we study the problem of delay-sensitive cloud service scheduling for the sustainable development of cloud computing. We first focus on the development of scheduling methods for delay-sensitive cloud services on a single server, with the goal of maximizing a service provider's profit. We then extend our study to scheduling cloud services in distributed environments. In particular, we develop a queue-based model and derive efficient request dispatching and processing decisions in a multi-electricity-market environment to improve profits for service providers. We next study the problem of multi-tier service scheduling: by carefully assigning sub-deadlines to the service tiers, our approach can significantly improve resource usage efficiency with statistically guaranteed QoS. Finally, we study the power-conscious resource provisioning problem for service requests with different QoS requirements. By properly sharing computing resources among different requests, our method statistically guarantees all QoS requirements with a minimized number of powered-on servers and thus minimized power consumption. The significance of our research is that it forms one part of the integrated effort from both industry and academia to ensure the sustainable growth of cloud computing as it continues to evolve and change our society profoundly.
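A hedged sketch of the dispatching idea in a multi-electricity-market setting: send each request to the data center with the cheapest local electricity whose queueing-based response-time estimate still meets the deadline. The M/M/1 model and all parameters below are illustrative assumptions, not the dissertation's model.

    # Price-aware, deadline-constrained request dispatching (M/M/1 estimate).

    def dispatch(deadline_s, centers):
        # centers: list of dicts with service rate mu (req/s), current arrival
        # rate lam (req/s), and local electricity price ($/kWh).
        best = None
        for c in centers:
            if c["lam"] + 1 >= c["mu"]:
                continue                                  # would overload this queue
            resp = 1.0 / (c["mu"] - (c["lam"] + 1))       # M/M/1 mean response time
            if resp <= deadline_s and (best is None or c["price"] < best["price"]):
                best = c
        return best                                       # None if no center qualifies

    centers = [{"mu": 100.0, "lam": 80.0, "price": 0.12},
               {"mu": 120.0, "lam": 90.0, "price": 0.08}]
    print(dispatch(0.1, centers))   # picks the cheaper center that meets the deadline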