52 results for Context data
in Digital Commons at Florida International University
Abstract:
The deployment of wireless communications, coupled with the popularity of portable devices, has led to significant research in the area of mobile data caching. Prior research has focused on developing solutions that allow applications to run in wireless environments using proxy-based techniques. Most of these approaches are semantic-based and do not provide adequate support for representing the context of a user (i.e., the interpreted human intention). Although context may be treated implicitly, it is still crucial to data management. To address this challenge, this dissertation focuses on two characteristics: how to predict (i) the future location of the user and (ii) the locations in which a fetched data item remains a valid answer to the query. Using this approach, more complete information about the dynamics of an application environment is maintained. The contribution of this dissertation is a novel data caching mechanism for pervasive computing environments that can adapt dynamically to a mobile user's context. In this dissertation, we design and develop a conceptual model and context-aware protocols for wireless data caching management. Our replacement policy uses the validity of the data fetched from the server and the neighboring locations to decide which cache entries are least likely to be needed in the future and are therefore good candidates for eviction when cache space is needed. The context-aware prefetching algorithm exploits the query context, defined by the mobile user's movement pattern and the context of the requested information, to effectively guide the prefetching process. Numerical results and simulations show that the proposed prefetching and replacement policies significantly outperform conventional ones. Anticipated applications of these solutions include biomedical engineering, tele-health, medical information systems, and business.
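A minimal sketch of how such a context-aware eviction decision might look, assuming a hypothetical CacheEntry structure and a simple overlap-plus-recency score; the names and heuristic are illustrative and are not the dissertation's actual protocol:

```python
# Hypothetical sketch of a context-aware cache replacement decision.
from dataclasses import dataclass, field


@dataclass
class CacheEntry:
    item_id: str
    # Locations (e.g. cell IDs) in which the fetched answer remains valid.
    valid_locations: set = field(default_factory=set)
    last_access: int = 0


def eviction_candidate(entries, predicted_locations, now):
    """Pick the entry least likely to be needed, given the user's
    predicted future locations (the query context)."""
    def score(entry):
        # Overlap between where the data is valid and where the user is
        # expected to be; stale, out-of-context entries score lowest.
        overlap = len(entry.valid_locations & predicted_locations)
        recency = 1.0 / (1 + now - entry.last_access)
        return overlap + recency

    return min(entries, key=score)


# Example: evict when the user is predicted to move toward cells c4 and c5.
cache = [
    CacheEntry("traffic:c1", {"c1", "c2"}, last_access=3),
    CacheEntry("traffic:c4", {"c4", "c5"}, last_access=1),
]
victim = eviction_candidate(cache, {"c4", "c5"}, now=10)
print("evict:", victim.item_id)  # -> traffic:c1
```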
Abstract:
This study analyzed the health and overall land cover of citrus crops in Florida. The analysis was completed using Landsat satellite imagery available free of charge from the University of Maryland Global Landcover Change Facility. The project hypothesized that combining citrus production (economic) data with citrus area per county derived from spectral signatures would reveal correlations between observable spectral reflectance throughout the year and the fiscal impact of citrus on local economies. A positive correlation between these two data types would allow us to predict the economic impact of citrus using spectral data analysis to determine final crop harvests.
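As a rough illustration of the hypothesized relationship, the correlation between spectrally derived citrus area and county-level production value could be computed as below; the figures are placeholder values, not the study's data:

```python
# Illustrative only: placeholder per-county values, not study data.
import numpy as np

# Citrus area (hectares) classified from Landsat imagery, and the
# corresponding reported production value (million USD) per county.
spectral_area = np.array([12.4, 30.1, 8.7, 45.2, 22.9])
production_value = np.array([15.0, 41.3, 10.2, 60.8, 28.5])

r = np.corrcoef(spectral_area, production_value)[0, 1]
print(f"Pearson r = {r:.2f}")  # a strong positive r would support the hypothesis
```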
Abstract:
The purpose of this descriptive study was to evaluate the banking and insurance technology curriculum at ten junior colleges in Taiwan. The study focused on curriculum, curriculum materials, instruction, support services, student achievement, and job performance. Data were collected from a diverse sample of faculty, students, alumni, and employers. Questionnaires on the evaluation of curricula at technical junior colleges were developed for this specific case. Data were collected from the sample described above and analyzed using ANOVA, t-tests, and crosstabulations. Findings indicate that there is room for improvement in meeting individual students' needs. Using Stufflebeam's CIPP model for curriculum evaluation, it was determined that the curriculum was adequate in terms of the knowledge and skills imparted to students. However, students were dissatisfied with the rigidity of the curriculum and the lack of opportunity to satisfy their individual needs. Employers were satisfied with both the academic preparation of students and their on-the-job performance. In sum, the curriculum of the two-year banking and insurance technology programs at junior colleges in Taiwan was shown to have adequately prepared a workforce to enter business. It is now time to look toward the future and adapt the curriculum and instruction to the needs of an ever-evolving high-tech society.
Abstract:
This research presents several components encompassing the objective of data partitioning and replication management in distributed GIS databases. Modern Geographic Information Systems (GIS) databases are often large and complicated, so data partitioning and replication management problems need to be addressed in the development of an efficient and scalable solution. Part of the research is to study the patterns of geographic raster data processing and to propose algorithms that improve the availability of such data. These algorithms and approaches target the granularity of geographic data objects as well as data partitioning in geographic databases to achieve high data availability and Quality of Service (QoS) in distributed data delivery and processing. To achieve this goal, a dynamic, real-time approach for mosaicking digital images of different temporal and spatial characteristics into tiles is proposed. This dynamic approach reuses digital images on demand and generates mosaicked tiles only for the required region, according to user requirements such as resolution, temporal range, and target bands, to reduce redundancy in storage and to use available computing and storage resources more efficiently. Another part of the research pursued methods for efficiently acquiring GIS data from external heterogeneous databases and Web services, as well as end-user GIS data delivery enhancements, automation, and 3D virtual reality presentation. Vast numbers of computing, network, and storage resources on the Internet sit idle or underutilized. The proposed "Crawling Distributed Operating System" (CDOS) approach employs such resources and creates benefits for the hosts that lend their CPU, network, and storage resources to be used in a GIS database context. The results of this dissertation demonstrate effective ways to develop a highly scalable GIS database. The approach developed in this dissertation resulted in the creation of the TerraFly GIS database, which is used by the US government, researchers, and the general public to facilitate Web access to remotely sensed imagery and GIS vector information.
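A simplified sketch of the on-demand mosaicking idea, assuming hypothetical Scene metadata and a mosaic_tile helper; actual resampling and compositing would be delegated to a raster library, and the names are illustrative rather than TerraFly's implementation:

```python
# Hypothetical on-demand tile mosaicking: select only the source scenes
# that match the requested region, time range, and bands.
from dataclasses import dataclass
from datetime import date


@dataclass
class Scene:
    path: str
    bounds: tuple          # (xmin, ymin, xmax, ymax)
    acquired: date
    bands: frozenset


def intersects(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1


def mosaic_tile(scenes, region, time_range, bands, resolution):
    """Build a tile description from only the scenes the request needs;
    reusing existing scenes avoids storing pre-built mosaics for every
    possible region."""
    start, end = time_range
    selected = [
        s for s in scenes
        if intersects(s.bounds, region)
        and start <= s.acquired <= end
        and bands <= s.bands
    ]
    selected.sort(key=lambda s: s.acquired, reverse=True)  # newest on top
    return {"region": region, "resolution": resolution,
            "bands": sorted(bands), "sources": [s.path for s in selected]}
```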
Abstract:
The purpose of this study was to determine whether an experimental context-based delivery format for mathematics would be more effective than a traditional model for increasing the mathematics performance of at-risk students in a public high school of choice, as evidenced by significant gains in achievement on the standards-based Mathematics subtest of the FCAT and final academic grades in Algebra I. The guiding rationale for this approach is captured in the Secretary's Commission on Achieving Necessary Skills (SCANS) report of 1992, which resulted in school-to-work initiatives (United States Department of Labor). The charge for educational reform has also been codified at the state level in the Educational Accountability Act of 1971 (Florida Statutes, 1995) and at the national level in the No Child Left Behind Act of 2001. A particular focus of educational reform is low-performing, at-risk students. This dissertation explored the effects of a context-based curricular reform designed to enhance Algebra I content, using a research design consisting of two delivery models: a traditional content-based course and a thematically structured, content-based course. In this case, the thematic element was business education, as many advocates in career education assert that this format engages students who are often otherwise disinterested in mathematics in a relevant, SCANS-skills setting. The subjects in each supplementary course were ninth-grade students who were both low performers in eighth-grade mathematics and who had not passed the eighth-grade administration of the standards-based FCAT Mathematics subtest. The sample was limited to two groups of 25 students and two teachers. The site for this study was a public charter school. Student-generated performance data were analyzed using descriptive statistics. Results indicated that, contrary to widely held beliefs, contextual presentation of content did not produce significant gains in either academic performance or test performance for the experimental treatment group. Further, results indicated that there was no meaningful difference in performance between the two groups.
Abstract:
The primary aim of this dissertation is to develop data mining tools for knowledge discovery in biomedical data when multiple (homogeneous or heterogeneous) sources of data are available. The central hypothesis is that, when information from multiple data sources is used appropriately and effectively, knowledge discovery can be achieved better than is possible from a single source alone. Recent advances in high-throughput technology have enabled biomedical researchers to generate large volumes of diverse types of data on a genome-wide scale. These data include DNA sequences, gene expression measurements, and much more; they provide the motivation for building analysis tools to elucidate the modular organization of the cell. The challenges include efficiently and accurately extracting information from the multiple data sources, representing the information effectively, developing analytical tools, and interpreting the results in the context of the domain. The first part considers the application of feature-level integration to design classifiers that discriminate between soil types. The machine learning tools SVM and KNN were used to successfully distinguish between several soil samples. The second part considers clustering using multiple heterogeneous data sources. The resulting Multi-Source Clustering (MSC) algorithm was shown to perform better than clustering methods that use only a single data source or a simple feature-level integration of heterogeneous data sources. The third part proposes a new approach to effectively incorporate incomplete data into clustering analysis. Adapted from the K-means algorithm, the Generalized Constrained Clustering (GCC) algorithm makes use of incomplete data in the form of constraints to perform exploratory analysis. Novel approaches for extracting constraints were proposed. For sufficiently large constraint sets, the GCC algorithm outperformed the MSC algorithm. The last part considers the problem of providing a theme-specific environment for mining multi-source biomedical data. The database, called PlasmoTFBM and focused on gene regulation of Plasmodium falciparum, contains diverse information and has a simple interface that allows biologists to explore the data. It provided a framework for comparing different analytical tools for predicting regulatory elements and for designing useful data mining tools. The conclusion is that the experiments reported in this dissertation strongly support the central hypothesis.
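For illustration, a minimal constrained K-means in the spirit of clustering with pairwise constraints is sketched below; it is not the GCC algorithm itself, and the cannot-link handling shown is only one simple way such constraints can steer cluster assignments:

```python
# Illustrative constrained K-means: cannot-link pairs are kept apart.
import numpy as np


def constrained_kmeans(X, k, cannot_link, n_iter=20, seed=0):
    """K-means in which a point is never assigned to a cluster that
    already holds one of its cannot-link partners in this iteration."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.full(len(X), -1)
    for _ in range(n_iter):
        labels[:] = -1
        for i, x in enumerate(X):
            partners = [b if a == i else a
                        for a, b in cannot_link if i in (a, b)]
            order = np.argsort(((centers - x) ** 2).sum(axis=1))
            for c in order:                       # nearest feasible center
                if all(labels[j] != c for j in partners):
                    labels[i] = c
                    break
        for c in range(k):                        # update cluster centers
            members = X[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels, centers


# Toy example: points 0 and 1 are close together but constrained apart.
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9]])
labels, _ = constrained_kmeans(X, k=2, cannot_link=[(0, 1)])
print(labels)
```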
Abstract:
The main objective is to show how usage data from new media can be used to assess areas where students need more help in creating their ETDs. After attending this session, attendees will be able to use usage data from new media, in conjunction with traditional assessment data, to identify strengths and weaknesses in ETD training and resources. The burgeoning ETD program at Florida International University (FIU) has provided many opportunities to experiment with assessment strategies and new media. Usage statistics from YouTube and the ETD LibGuide revealed areas of strength and weakness in the training resources and the overall ETD training initiative. With the ability to assess these materials, they have been updated to better meet student needs. In addition to these assessment tools, there are opportunities to connect these statistics with data from a common-error checklist, student feedback from ETD workshops, and final ETD submission surveys to create a full-fledged, outcome-based assessment program for the ETD initiative.
Abstract:
Long-term research on freshwater ecosystems provides insights that can be difficult to obtain from other approaches. Widespread monitoring of ecologically relevant water-quality parameters spanning decades can facilitate important tests of ecological principles. Unique long-term data sets and analytical tools are increasingly available, allowing for powerful and synthetic analyses across sites. Long-term measurements or experiments in aquatic systems can capture rare events, changes in highly variable systems, time-lagged responses, cumulative effects of stressors, and biotic responses that encompass multiple generations. Data are available from formal networks, local to international agencies, private organizations, various institutions, and paleontological and historical records; brief literature surveys suggest that much existing data have not been synthesized. The ecological sciences will benefit from careful maintenance and analysis of existing long-term programs, and the resulting insights can aid in the design of effective future long-term experimental and observational efforts. Long-term research on freshwaters is particularly important because of their value to humanity.
Abstract:
The paper examines the nature of qualitative empirical studies published in the AHRD proceedings from 1999 to 2003 and discusses findings on method, rationale for method, data collection, sampling strategies, and integrity measures.
Abstract:
This paper briefly discusses the three views that have emerged over the centuries on the ethics of tax evasion, then summarizes the results of several opinion surveys that were conducted on the topic. The paper concludes with a discussion of how these study results may be used in the classroom.
Abstract:
This paper details the research methods an introductory qualitative research class used to both study an issue related to race and identity, and to familiarize themselves with data collection strategies. Throughout the paper the authors attempt to capture the challenges, disagreements, and consensus building that marked this unusual research endeavor.
Abstract:
Electronic database handling of business information has gradually gained popularity in the hospitality industry. This article provides an overview of the fundamental concepts of a hotel database and investigates the feasibility of incorporating computer-assisted data mining techniques into hospitality database applications. The author also exposes some potential myths associated with data mining in hospitality database applications.
Abstract:
In a post-Cold War, post-9/11 world, the advent of US global supremacy resulted in the installation, perpetuation, and dissemination of an Absolutist Security Agenda (hereinafter, ASA). The US ASA explicitly and aggressively articulates and equates US national security interests with the security of all states in the international system, and it replaced the bipolar Cold War framework that defined international affairs from 1945 to 1992. Since the collapse of the USSR and the 11 September 2001 terrorist attacks, the US has unilaterally defined, implemented, and managed systemic security policy. The US ASA is indicative of a systemic category of knowledge (security) anchored in variegated conceptual and material components, such as morality, philosophy, and political rubrics. The US ASA is based on a logic that involves the following security components: (1) hyper-militarization, (2) intimidation, (3) coercion, (4) criminalization, (5) panoptic surveillance, (6) plenary security measures, and (7) unabashed US interference in the domestic affairs of select states. Such interference has produced destabilizing tensions and conflicts that have, in turn, produced resistance, revolutions, proliferation, cults of personality, and militarization. This is the case because the US ASA rests on the notion that the international system of states is an extension and instrument of US power, rather than a system and/or society of states comprised of functionally sovereign entities. To analyze the US ASA, this study utilizes: (1) official government statements, legal doctrines, treaties, and policies pertaining to US foreign policy; (2) militarization rationales, budgets, and expenditures; and (3) case studies of rogue states. The data used in this study are drawn from publicly available information (academic journals, think-tank publications, government publications, and information provided by international organizations). The data support the contention that global security is effectuated via a discrete set of hegemonic/imperialistic US values and interests, finding empirical expression in legal acts (USA PATRIOT Act of 2001) and the concept of rogue states. Rogue states, therefore, provide test cases to clarify the breadth, depth, and consequentialness of the US ASA in world affairs vis-à-vis the relationship between US security and global security.