14 results for Web Information Gathering, Web Personalization, Concepts
in Digital Commons at Florida International University
Abstract:
In September 2002, the State of Florida implemented a new retirement structure for those employed in the Florida Public School System. Teachers were given the option to maintain their existing defined benefit plan or choose the newly offered defined contribution plan. The variables that affect planning for retirement are innumerable. Identifying the most significant variables is essential to understanding how one plans for retirement.

This study examined the relationship between hypothesized psychosocial and demographic factors and an individual's level of pre-retirement planning. The criterion variable, the level of pre-retirement planning, comprised two concepts. First, the time spent thinking about retirement was determined by the score an individual received on a pre-retirement planning scale. This scale included the concepts of information gathering, goals, anticipated resources, and long-range planning. Second, implementation of retirement plan procedures was determined by the percentage an individual annually deferred to retirement.

The survey used for data collection contained 50 closed-ended items. It was distributed to all full-time teachers in nine randomly selected elementary, middle, and senior high schools throughout Miami-Dade County Public Schools. Multiple regression and crosstabulation indicated that math anxiety, general risk, years of service, and total family income were significant predictors of the level of pre-retirement planning, as measured by the pre-retirement planning scale. In addition, the statistical analysis indicated that math anxiety, internal locus of control, years of service, and total family income were significant predictors of the level of pre-retirement planning, as measured by the amount deferred to retirement. An individual's level of math anxiety and family income were the two most significant predictors for both concepts of the level of pre-retirement planning.

Based on the findings of the study, recommendations focused on assessing an individual's level of math anxiety and educating teachers, particularly pre-service candidates, about the factors that affect pre-retirement planning. Further research should investigate the benefit of such educational programs.
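As an illustration only, a multiple regression of the kind reported above could be specified as in the following sketch; the file and column names are hypothetical and do not come from the study's data.

```python
# Hypothetical sketch of the kind of multiple regression described above.
# File and column names are illustrative, not the study's actual dataset.
import pandas as pd
import statsmodels.formula.api as smf

# Each row is one surveyed teacher; the outcome is the pre-retirement
# planning scale score, the predictors are the hypothesized factors.
df = pd.read_csv("teacher_survey.csv")  # assumed file

model = smf.ols(
    "planning_scale ~ math_anxiety + general_risk + years_of_service + family_income",
    data=df,
).fit()
print(model.summary())  # coefficients indicate which predictors are significant
```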
Abstract:
Research on the adoption of innovations by individuals has been criticized for focusing on various factors that lead to the adoption or rejection of an innovation while ignoring important aspects of the dynamic process that takes place. Theoretical process-based models hypothesize that individuals go through consecutive stages of information gathering and decision making but do not clearly explain the mechanisms that cause an individual to leave one stage and enter the next one. Research on the dynamics of the adoption process has lacked a structurally formal and quantitative description of the process.

This dissertation addresses the adoption process of technological innovations from a Systems Theory perspective and assumes that individuals roam through different, not necessarily consecutive, states, determined by the levels of quantifiable state variables. It is proposed that different levels of these state variables determine the state in which potential adopters are. Various events that alter the levels of these variables can cause individuals to migrate into different states.

It was believed that Systems Theory could provide the required infrastructure to model the innovation adoption process, particularly applied to information technologies, in a formal, structured fashion. This dissertation assumed that an individual progressing through an adoption process could be considered a system, where the occurrence of different events affects the system's overall behavior and ultimately the adoption outcome. The research effort aimed at identifying the various states of such a system and the significant events that could lead the system from one state to another. By mapping these attributes onto an “innovation adoption state space,” the adoption process could be fully modeled and used to assess the status, history, and possible outcomes of a specific adoption process.

A group of Executive MBA students were observed as they adopted Internet-based technological innovations. The data collected were used to identify clusters in the values of the state variables and consequently define significant system states. Additionally, events were identified across the student sample that systematically moved the system from one state to another. The compilation of identified states and change-related events enabled the definition of an innovation adoption state-space model.
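Purely as a sketch of the clustering step described above (not the dissertation's actual procedure), candidate adoption states could be identified from the observed values of the state variables along these lines; the file and variable layout are assumed.

```python
# Illustrative sketch: identify candidate adoption "states" as clusters in
# the observed levels of quantifiable state variables (names are assumed).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# rows = observations of individuals over time;
# columns = hypothetical state variables (e.g., knowledge, perceived usefulness, usage).
X = np.loadtxt("state_variables.csv", delimiter=",", skiprows=1)

X_scaled = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_scaled)

# Each cluster label is read as one state in the adoption state space;
# changes of label over time correspond to state-changing events.
print(kmeans.labels_)
print(kmeans.cluster_centers_)
```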
Abstract:
Menu analysis is the gathering and processing of key pieces of information to make them more manageable and understandable. Ultimately, menu analysis allows managers to make more informed decisions about prices, costs, and items to be included on a menu. The author discusses whether labor as well as food costs need to be included in menu analysis and whether managers need to categorize menu items differently when doing menu analysis based on customer eating patterns.
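A minimal, hypothetical illustration of the cost question raised above: computing an item's contribution margin with and without an allocated labor cost (all figures are invented).

```python
# Illustrative sketch: contribution margin per menu item, computed with and
# without an allocated labor cost. Items and figures are hypothetical.
menu = [
    {"item": "Grilled salmon", "price": 24.00, "food_cost": 9.50, "labor_cost": 3.25},
    {"item": "House salad",    "price": 11.00, "food_cost": 2.75, "labor_cost": 1.10},
]

for m in menu:
    cm_food_only = m["price"] - m["food_cost"]
    cm_with_labor = m["price"] - m["food_cost"] - m["labor_cost"]
    print(f'{m["item"]}: CM (food only) = {cm_food_only:.2f}, '
          f'CM (food + labor) = {cm_with_labor:.2f}')
```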
Abstract:
Because some Web users will be able to design a template to visualize information from scratch, while other users need to automatically visualize information by changing some parameters, providing different levels of customization of the information is a desirable goal. Our system allows the automatic generation of visualizations given the semantics of the data, and the static or pre-specified visualization by creating an interface language. We address information visualization taking into consideration the Web, where the presentation of the retrieved information is a challenge.

We provide a model to narrow the gap between the user's way of expressing queries and database manipulation languages (SQL) without changing the system itself, thus improving the query specification process. We develop a Web interface model that is integrated with the HTML language to create a powerful language that facilitates the construction of Web-based database reports.

As opposed to other papers, this model offers a new way of exploring databases, focusing on providing Web connectivity to databases with minimal or no result buffering, formatting, or extra programming. We describe how to easily connect the database to the Web. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views depending on the contents and the structure of the data. Current database front-ends typically attempt to display the database objects in a flat view, making it difficult for users to grasp the contents and the structure of their result. Our model narrows the gap between databases and the Web.

The overall objective of this research is to construct a model that accesses different databases easily across the net and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application. This increases the speed of development. In addition, using only a Web browser, the end user can retrieve data from databases remotely to make necessary modifications and manipulations of data using the Web-formatted forms and reports, independent of the platform, without having to open different applications or learn to use anything but their Web browser. We introduce a strategic method to generate and construct SQL queries, enabling inexperienced users who are not well versed in SQL to build a syntactically and semantically valid SQL query and to understand the retrieved data. The generated SQL query is validated against the database schema to ensure harmless and efficient SQL execution. (Abstract shortened by UMI.)
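The following is only a sketch of the general idea of generating a SQL query from user selections and checking it against the database schema before execution; the table, columns, and database file are assumptions, not the system described in the abstract.

```python
# Illustrative sketch: build a SQL query from user-chosen table/columns/filters
# and validate the column names against the schema before running it.
# Table, column, and database names are assumed placeholders.
import sqlite3

def build_query(table, columns, filters):
    where = " AND ".join(f"{col} = ?" for col in filters)
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if where:
        sql += f" WHERE {where}"
    return sql, list(filters.values())

def validate_against_schema(conn, table, columns):
    # PRAGMA table_info returns one row per column; row[1] is the column name.
    known = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    unknown = set(columns) - known
    if unknown:
        raise ValueError(f"Unknown columns: {unknown}")

conn = sqlite3.connect("reports.db")          # assumed database
cols = ["name", "salary"]
filters = {"department": "Sales"}
sql, params = build_query("employees", cols, filters)
validate_against_schema(conn, "employees", cols + list(filters))
for row in conn.execute(sql, params):
    print(row)
```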
Abstract:
Methods for accessing data on the Web have been the focus of active research over the past few years. In this thesis we propose a method for representing Web sites as data sources. We designed a Data Extractor data retrieval solution that allows us to define queries to Web sites and process the resulting data sets. Data Extractor is being integrated into the MSemODB heterogeneous database management system. With its help, database queries can be distributed over both local and Web data sources within the MSemODB framework.

Data Extractor treats Web sites as data sources, controlling query execution and data retrieval. It works as an intermediary between the applications and the sites. Data Extractor utilizes a twofold “custom wrapper” approach for information retrieval. Wrappers for the majority of sites are easily built using a powerful and expressive scripting language, while complex cases are processed using Java-based wrappers that utilize a specially designed library of data retrieval, parsing, and Web access routines. In addition to wrapper development, we thoroughly investigate issues associated with Web site selection, analysis, and processing.

Data Extractor is designed to act as a data retrieval server, as well as an embedded data retrieval solution. We also use it to create mobile agents that are shipped over the Internet to the client's computer to perform data retrieval on behalf of the user. This approach allows Data Extractor to distribute and scale well.

This study confirms the feasibility of building custom wrappers for Web sites. This approach provides accuracy of data retrieval, as well as power and flexibility in handling complex cases.
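A minimal sketch of the “custom wrapper” idea, assuming a hypothetical listing page; the actual system uses its own scripting language and Java-based wrappers rather than the Python shown here.

```python
# Minimal illustration of the "custom wrapper" idea: fetch a page from a
# Web site and turn part of its content into structured records. The URL
# and the extraction pattern are hypothetical placeholders.
import re
import urllib.request

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def wrap_product_listing(html):
    # Extract (name, price) pairs from a hypothetical listing page.
    pattern = re.compile(
        r'<span class="name">(.*?)</span>\s*<span class="price">(.*?)</span>',
        re.S,
    )
    return [{"name": n.strip(), "price": p.strip()} for n, p in pattern.findall(html)]

html = fetch("http://example.com/products")    # placeholder site
for record in wrap_product_listing(html):
    print(record)                              # records can now be queried like rows
```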
Abstract:
Stable isotope analysis has emerged as one of the primary means for examining the structure and dynamics of food webs, and numerous analytical approaches are now commonly used in the field. Techniques range from simple, qualitative inferences based on the isotopic niche, to Bayesian mixing models that can be used to characterize food-web structure at multiple hierarchical levels. We provide a comprehensive review of these techniques, and thus a single reference source to help identify the most useful approaches to apply to a given data set. We structure the review around four general questions: (1) what is the trophic position of an organism in a food web?; (2) which resource pools support consumers?; (3) what additional information does relative position of consumers in isotopic space reveal about food-web structure?; and (4) what is the degree of trophic variability at the intrapopulation level? For each general question, we detail different approaches that have been applied, discussing the strengths and weaknesses of each. We conclude with a set of suggestions that transcend individual analytical approaches, and provide guidance for future applications in the field.
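For example, a commonly used single-baseline estimate of trophic position from nitrogen isotope ratios (relevant to question 1 above) can be written as follows, where λ is the trophic position of the baseline organism and Δ¹⁵N is the assumed per-trophic-level enrichment (often taken to be about 3.4‰):

```latex
% Commonly used single-baseline trophic position estimate (illustrative):
% \lambda = trophic position of the baseline organism
% \Delta^{15}\mathrm{N} = assumed per-trophic-level enrichment
\[
  \mathrm{TP}_{\text{consumer}}
    = \lambda +
      \frac{\delta^{15}\mathrm{N}_{\text{consumer}} - \delta^{15}\mathrm{N}_{\text{base}}}
           {\Delta^{15}\mathrm{N}}
\]
```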
Abstract:
The web has emerged as a potent business channel. Yet many hospitality websites are irrelevant in a new and cluttered technical world. Knowing how to promote and advertise a website and capitalizing on available resources are the keys to success. The authors lay out a marketing plan for increasing hospitality website traffic.
Abstract:
To promote the use of the bicycle transportation mode in times of increasing urban traffic congestion, the Broward County Metropolitan Planning Organization funded the development of a Web-based trip planner for cyclists. This presentation demonstrates the integration of the ArcGIS Server 9.3 environment with the ArcGIS JavaScript Extension for Google Maps API and the Google Local Search Control for Maps API. This allows the use of Google mashup GIS functionality, i.e., Google local search for selection of the trip start, trip destination, and intermediate waypoints, and the integration of Google Maps base layers. The ArcGIS Network Analyst extension is used for the route search, where algorithms for the fastest, safest, simplest, most scenic, and shortest routes are embedded. This presentation also describes how attributes of the underlying network sources have been combined to facilitate the search for optimized routes.
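As an illustration of combining network attributes into a single impedance for route search (the planner itself uses ArcGIS Network Analyst; the graph and weights below are invented), a shortest-path search over a combined cost could look like this.

```python
# Illustrative sketch: combine edge attributes (length, safety rating) into one
# impedance and search for the "safest" route with Dijkstra's algorithm.
# The real planner uses ArcGIS Network Analyst; graph data here are hypothetical.
import heapq

# edge: (to_node, length_m, safety_penalty); higher penalty = less bike-friendly
graph = {
    "A": [("B", 400, 1.0), ("C", 250, 3.0)],
    "B": [("D", 300, 1.0)],
    "C": [("D", 200, 2.5)],
    "D": [],
}

def impedance(length_m, safety_penalty, safety_weight=2.0):
    # Longer and less safe edges cost more.
    return length_m * (1.0 + safety_weight * (safety_penalty - 1.0))

def safest_route(start, goal):
    queue, best = [(0.0, start, [start])], {start: 0.0}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        for nxt, length_m, penalty in graph[node]:
            new_cost = cost + impedance(length_m, penalty)
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(queue, (new_cost, nxt, path + [nxt]))
    return None

print(safest_route("A", "D"))  # lowest combined-impedance path from A to D
```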
Abstract:
This poster presentation features three route planning applications developed by the Florida International University GIS Center and the Geomatics program at the University of Florida, and outlines their context-based differences. The first route planner has been developed for cyclists in three Florida counties, i.e., Miami-Dade County, Broward County, and Palm Beach County. The second route planner computes safe pedestrian routes to schools and has been developed for Miami-Dade County. The third route planner combines pre-compiled cultural/eco routes and point-to-point route planning for the City of Coral Gables. This poster highlights the differences in design (user interface) and implementation (routing options) between the three route planners as a result of different application contexts and target audiences.
Abstract:
A group of four applications:

Top 20 Pedestrian Crash Locations: This application is designed to display the top 20 pedestrian crash locations in both a map view and a detailed information view.

FDOT Crash Reporting Tool: This application is designed to simplify the usage and sharing of CAR data. The application can load raw data from CAR and display it in a web map interface.

FDOT Online Document Portal: This application is designed to allow FDOT project managers to share and manage documents through a user-friendly, GIS-enabled web interface.

GIS Data Collection for Pedestrian Safety Tool: The FIU-GIS Center was responsible for the data collection and processing work for the Pedestrian Safety Tool project. The outcome of this task is presented by a simple web-GIS application designed to host the GIS data by project.
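As a hypothetical sketch of how a "top 20 pedestrian crash locations" list could be derived from raw crash records before display in the map view (the file and column names are assumptions, not the project's actual data):

```python
# Illustrative sketch: derive a "top 20 pedestrian crash locations" list from raw
# crash records. The file name and column names are hypothetical placeholders.
import pandas as pd

crashes = pd.read_csv("pedestrian_crashes.csv")   # assumed raw crash export

top20 = (
    crashes.groupby(["intersection", "latitude", "longitude"])
           .size()
           .reset_index(name="crash_count")
           .sort_values("crash_count", ascending=False)
           .head(20)
)

# top20 can then be served to the web map (map view) alongside the
# detailed information view described above.
print(top20.to_string(index=False))
```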