25 results for document and text processing
in Digital Commons at Florida International University
Abstract:
This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel for users with visual refractive errors. The target users of this system are individuals with moderate to severe visual aberrations for which conventional means of compensation, such as glasses or contact lenses, do not improve their vision. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to this user, will counteract his/her visual aberration. The method described in this dissertation advances the development of techniques for providing such compensation by integrating spatial information in the image as a means to eliminate some of the shortcomings inherent in using display devices such as monitors or LCD panels. Additionally, physiological considerations are discussed and integrated into the method for providing said compensation. To provide a realistic sense of the performance of the methods described, they were tested by mathematical simulation in software, with a single-lens high-resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberrations. Experiments were conducted on these systems, and the data collected from them were evaluated using statistical analysis. The experimental results revealed that the pre-compensation method produced a statistically significant improvement in vision for all of the systems. Although significant, the improvement was not as large as expected in the human subject tests. Further analysis suggests that even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing. This would require real-time monitoring of relevant variables (e.g., pupil diameter) and continuous adjustment of the pre-compensation process to yield maximum viewing enhancement.
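At its core, displaying a counter-aberrated image amounts to deconvolving the intended image by the eye's point-spread function (PSF) derived from the measured wavefront. The sketch below shows only that basic frequency-domain step, assuming a precomputed PSF; it is a minimal illustration of the general idea, not the dissertation's full method, which also integrates spatial information and physiological constraints. The function name and regularization parameter are invented for the example.

```python
import numpy as np

def precompensate(image, psf, k=1e-2):
    """Pre-compensate an image for a known ocular PSF using a
    Wiener-style inverse filter in the frequency domain (sketch).

    image : 2-D grayscale array with values in [0, 1]
    psf   : 2-D PSF from the measured wavefront aberration, same shape
            as `image`, centered and normalized to sum to 1
    k     : regularization constant limiting noise amplification
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))   # optical transfer function
    I = np.fft.fft2(image)
    # Wiener deconvolution: invert H where |H| is large, damp elsewhere
    W = np.conj(H) / (np.abs(H) ** 2 + k)
    pre = np.real(np.fft.ifft2(I * W))
    # displayed intensities must stay within the panel's dynamic range;
    # this clipping is one of the display-device limitations the
    # dissertation's spatial-information approach addresses
    return np.clip(pre, 0.0, 1.0)
```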
Abstract:
This study investigated the effects of word prediction and text-to-speech on the narrative composition writing skills of six fifth-grade Hispanic boys with specific learning disabilities (SLD). A multiple baseline design across subjects was used to explore the efficacy of word prediction and text-to-speech, alone and in combination, on four dependent variables: writing fluency (words per minute), syntax (T-units), spelling accuracy, and overall organization (holistic scoring rubric). Data were collected and analyzed during baseline, assistive technology interventions, and at 2-, 4-, and 6-week maintenance probes. Participants were equally divided into Cohorts A and B, and two separate but related studies were conducted. Throughout all phases of the study, participants wrote narrative compositions in 15-minute sessions. During baseline, participants used word processing only. During the assistive technology intervention condition, Cohort A participants used word prediction followed by word prediction with text-to-speech. Concurrently, Cohort B participants used text-to-speech followed by text-to-speech with word prediction. The results of this study indicate that word prediction, alone or in combination with text-to-speech, has a positive effect on the narrative writing compositions of students with SLD. Overall, participants in Cohorts A and B wrote more words, produced more T-units, and spelled more words correctly. A sign test indicated that these observed effects were not likely due to chance. Additionally, the quality of writing improved as measured by holistic rubric scores. When participants in Cohort B used text-to-speech alone, inconsequential results were observed on all dependent variables except spelling accuracy. This study demonstrated that word prediction, alone or in combination with text-to-speech, helps students with SLD write longer, higher-quality narrative compositions. These results suggest that word prediction, or word prediction with text-to-speech, be considered as a writing support to facilitate the production of a first draft of a narrative composition. However, caution should be exercised in using text-to-speech alone, as its effectiveness has not been established. Recommendations for future research include investigating the use of these technologies in other phases of the writing process, with other student populations, and with other writing styles. Further, these technologies should be investigated while integrated into classroom composition instruction.
Abstract:
Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated on parallel lines. An example of this manufacturing system configuration is observed at a manufacturing facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial and various configurations of parallel processing with multiple product classes, as well as job circulation due to random part failures. In addition, appropriate correction terms obtained via regression analysis were added to the approximations in order to minimize the error between the analytical approximations and the simulation models. Markovian and general-type manufacturing systems were studied, with multiple product classes, job circulation due to failures, and fork-join systems to model parallel processing. In both the Markovian and the general case, the approximations without correction terms performed quite well for one- and two-product problem instances. However, the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to deal with more than two products. The numerical comparisons showed that the approximations perform remarkably well when the correction factors were used. Overall, the average flow time error was reduced from 38.19% to 5.59% in the Markovian case, and from 26.39% to 7.23% in the general case. All the equations stated in the analytical formulations were implemented as a set of Matlab scripts. Using this set, operations managers of web server assembly lines, or of manufacturing and service systems with similar characteristics, can estimate different system performance measures and make judicious decisions, especially in setting delivery due dates, capacity planning, and bottleneck mitigation.
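For intuition, the single-station building block behind approximations of this kind is a two-moment formula of the Allen-Cunneen type, in which an M/M/m result is scaled by the variability of arrivals and services. The sketch below is a generic illustration, not the dissertation's formulation: the `correction` parameter merely stands in for the regression-fitted correction terms, and fork-join stations and job circulation are not modeled.

```python
import math

def erlang_c(m, rho):
    """Probability of waiting (Erlang C) in an M/M/m queue at utilization rho."""
    a = m * rho
    s = sum(a**k / math.factorial(k) for k in range(m))
    last = a**m / (math.factorial(m) * (1 - rho))
    return last / (s + last)

def approx_wait(lam, mu, m, ca2, cs2, correction=1.0):
    """Two-moment (Allen-Cunneen style) approximation of the mean wait at a
    GI/G/m station, scaled by an optional regression-style correction term.

    lam : arrival rate       mu  : service rate per server
    m   : parallel servers   ca2, cs2 : squared coefficients of variation
    """
    rho = lam / (m * mu)
    assert rho < 1, "station must be stable"
    wq_mmm = erlang_c(m, rho) / (m * mu * (1 - rho))  # M/M/m mean wait
    wq = wq_mmm * (ca2 + cs2) / 2.0                   # variability adjustment
    return correction * wq

# e.g. a station with 2 servers at 80% utilization and moderately variable flows
print(approx_wait(lam=1.6, mu=1.0, m=2, ca2=1.2, cs2=0.8))
```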
Abstract:
This research studies the hybrid flow shop problem with parallel batch-processing machines in one stage and discrete-processing machines in the other stages, processing jobs of arbitrary sizes. The objective is to minimize the makespan for a set of jobs. The problem is denoted as FF: batch1,sj:Cmax and is formulated as a mixed-integer linear program. The commercial solver AMPL/CPLEX is used to solve problem instances to optimality. Experimental results show that AMPL/CPLEX requires considerable time to find the optimal solution for even a small problem; a 6-job instance requires 2 hours on average. A bottleneck-first-decomposition (BFD) heuristic is proposed in this study to overcome the computational time problem encountered while using the commercial solver. The proposed BFD heuristic is inspired by the shifting bottleneck heuristic. It decomposes the entire problem into three sub-problems and schedules the sub-problems one by one. The proposed BFD heuristic consists of four major steps: formulating sub-problems, prioritizing sub-problems, solving sub-problems, and re-scheduling. For solving the sub-problems, two heuristic algorithms are proposed: one for scheduling a hybrid flow shop with discrete-processing machines, and the other for scheduling parallel batching machines (single stage). Both consider job arrival and delivery times. A designed experiment is conducted to evaluate the effectiveness of the proposed BFD, which is further evaluated against a set of common heuristics, including a randomized greedy heuristic and five dispatching rules. The results show that the proposed BFD heuristic outperforms all these algorithms. To evaluate the quality of the heuristic solution, a procedure is developed to calculate a lower bound on the makespan for the problem under study. The lower bound obtained is tighter than other bounds developed for related problems in the literature. A meta-search approach based on the Genetic Algorithm concept is developed to evaluate the significance of further improving the solution obtained from the proposed BFD heuristic. The experiment indicates that it reduces the makespan by 1.93% on average within negligible time when the problem size is less than 50 jobs.
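As one concrete piece of such a scheme, forming batches on a capacity-constrained batch machine can be sketched with a first-fit-decreasing rule. This is an illustrative assumption, not necessarily the rule inside the proposed BFD heuristic, and it adopts the common convention that a batch runs for as long as its longest job.

```python
def form_batches(jobs, capacity):
    """First-fit-decreasing batch forming for a batch-processing machine.

    jobs     : list of (job_id, size, proc_time) tuples
    capacity : machine capacity shared by all jobs in a batch
    Returns a list of (job_ids, batch_time) pairs; each batch runs for the
    longest proc_time among its members (a common convention).
    """
    batches = []  # each entry: [used_capacity, [job_ids], batch_time]
    for job_id, size, p in sorted(jobs, key=lambda j: j[1], reverse=True):
        for b in batches:
            if b[0] + size <= capacity:          # first batch that fits
                b[0] += size
                b[1].append(job_id)
                b[2] = max(b[2], p)              # batch time = longest job
                break
        else:
            batches.append([size, [job_id], p])  # open a new batch
    return [(ids, t) for _, ids, t in batches]

# toy instance: four jobs of varying size on a capacity-8 batch machine
print(form_batches([(1, 4, 10), (2, 3, 7), (3, 5, 2), (4, 2, 9)], capacity=8))
```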
Abstract:
Two studies investigated the influence of juror need for cognition on the systematic and heuristic processing of expert evidence. U.S. citizens reporting for jury duty in South Florida read a 15-page summary of a hostile work environment case containing expert testimony. The expert described a study she had conducted on the effects of viewing sexualized materials on men's behavior toward women. Certain methodological features of the expert's research varied across experimental conditions. In Study 1 (N = 252), the expert's study was valid, contained a confound, or included the potential for experimenter bias (internal validity) and relied on a small or large sample (sample size) of college undergraduates or trucking employees (ecological validity). When the expert's study included trucking employees, high need for cognition jurors in Study 1 rated the expert more credible and trustworthy than did low need for cognition jurors. Jurors were insensitive to variations in the study's internal validity or sample size. Juror ratings of plaintiff credibility, plaintiff trustworthiness, and study quality were positively correlated with verdict. In Study 2 (N = 162), the expert's published or unpublished study (general acceptance) was either valid or lacked an appropriate control group (internal validity) and included a sample of college undergraduates or trucking employees (ecological validity). High need for cognition jurors in Study 2 found the defendant liable more often and evaluated the expert evidence more favorably when the expert's study was internally valid than when an appropriate control group was missing. Low need for cognition jurors did not differentiate between the internally valid and invalid study. Variations in the study's general acceptance and ecological validity did not affect juror judgments. Juror ratings of expert and plaintiff credibility, plaintiff trustworthiness, and study quality were positively correlated with verdict. The present research demonstrated that need for cognition moderates juror sensitivity to expert evidence quality and that certain message-related heuristics influence juror judgments when the ability or motivation to process systematically is low.
Abstract:
Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining and knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access to heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, of identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis work include: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for semantic heterogeneity resolution by investigating the extents of the meta-data constructs of component schemas; this methodology is shown to be correct, complete, and unambiguous; (iii) a semi-automated technique for identifying semantic relations, which is the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process, acting as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of our work.
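Item (iii), identifying semantic relations through shared ontologies, can be caricatured as follows: each source publishes a mapping from its local attributes to shared concepts, and two attributes become integration candidates when their concepts coincide. Every schema, attribute, and concept name below is invented for illustration; the thesis itself operates over Sem-ODM schemas and Semantic SQL, whose actual syntax is not reproduced here.

```python
# hypothetical per-source mappings from local attributes to shared-ontology
# concepts, used for context mediation between heterogeneous schemas
SOURCE_MAPPINGS = {
    "hr_db":      {"employee.salary": "concept:base_pay",
                   "employee.name":   "concept:person_name"},
    "payroll_db": {"staff.pay":       "concept:base_pay",
                   "staff.full_name": "concept:person_name"},
}

def semantically_related(src_a, attr_a, src_b, attr_b):
    """Two local attributes are integration candidates when they map
    to the same shared-ontology concept."""
    ca = SOURCE_MAPPINGS[src_a].get(attr_a)
    cb = SOURCE_MAPPINGS[src_b].get(attr_b)
    return ca is not None and ca == cb

print(semantically_related("hr_db", "employee.salary",
                           "payroll_db", "staff.pay"))   # True
```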
Abstract:
Moving objects database systems are the most challenging sub-category among spatio-temporal database systems. A database system that updates the location information of GPS-equipped moving vehicles in real time has to meet even stricter requirements. Currently existing data storage models and indexing mechanisms work well only when the number of moving objects in the system is relatively small. This dissertation research aimed at the real-time tracking and history retrieval of massive numbers of vehicles moving on road networks. A total solution is provided for the real-time update of the vehicles' location and motion information, range queries on current and historical data, and prediction of the vehicles' movement in the near future. To achieve these goals, a new approach called Segmented Time Associated to Partitioned Space (STAPS) was first proposed in this dissertation for building and manipulating the indexing structures for moving objects databases. Applying the STAPS approach, an indexing structure associating a time interval tree with each road segment was developed for real-time database systems of vehicles moving on road networks. The indexing structure uses affordable storage to support real-time data updates and efficient query processing. The data update and query processing performance it provides is consistent, without restrictions such as a time window or an assumption of linear moving trajectories. An application system design based on a distributed system architecture with centralized organization was developed to maximally support the proposed data and indexing structures. The suggested system architecture is highly scalable and flexible. Finally, based on a real-world application model of vehicles moving region-wide, the main issues in the implementation of such a system were addressed.
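The core indexing idea, associating a time structure with each road segment, can be sketched briefly. The dissertation associates a time interval tree with each segment; the toy version below substitutes a sorted list with binary search to keep the sketch short, and all class, method, and vehicle names are illustrative.

```python
import bisect

class SegmentIndex:
    """Simplified stand-in for a per-road-segment time index in the STAPS
    spirit: each road segment records the time intervals during which
    vehicles occupied it, enabling time-range queries on history data."""

    def __init__(self):
        # (t_enter, t_exit, vehicle_id), kept sorted by t_enter
        self._entries = []

    def record(self, vehicle_id, t_enter, t_exit):
        bisect.insort(self._entries, (t_enter, t_exit, vehicle_id))

    def vehicles_during(self, t_from, t_to):
        """All vehicles whose stay on this segment overlaps [t_from, t_to]."""
        # entries entering after t_to cannot overlap the query window
        i = bisect.bisect_right(self._entries, (t_to, float("inf"), ""))
        return {vid for (te, tx, vid) in self._entries[:i] if tx >= t_from}

# one index object per road segment; a range query over a road-network
# region reduces to querying the indexes of the segments inside that region
seg = SegmentIndex()
seg.record("car42", t_enter=100, t_exit=180)
seg.record("car7",  t_enter=150, t_exit=300)
print(seg.vehicles_during(170, 200))   # {'car42', 'car7'}
```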
Abstract:
A job shop with one batch-processing machine and several discrete machines is analyzed. Given a set of jobs, their process routes, processing requirements, and sizes, the objective is to schedule the jobs such that the makespan is minimized. The batch-processing machine can process a batch of jobs as long as the machine capacity is not violated, and the batch processing time is equal to that of the longest processing job in the batch. The problem under study can be represented as Jm:batch:Cmax. If no batches were formed, the scheduling problem reduces to the classical job shop scheduling problem (i.e., Jm::Cmax), which is known to be NP-hard. This research extends the scheduling literature by combining Jm::Cmax with batch processing. The primary contributions are the mathematical formulation, a new network representation, and several solution approaches. The problem under study is observed widely in metal working and other industries, but has received limited or no attention due to its complexity. A novel network representation of the problem using disjunctive and conjunctive arcs, and a mathematical formulation, are proposed to minimize the makespan. In addition, several algorithms, such as batch-forming heuristics, dispatching rules, a Modified Shifting Bottleneck heuristic, Tabu Search (TS), and Simulated Annealing (SA), were developed and implemented. An experimental study was conducted to evaluate the proposed heuristics, and the results were compared to those from a commercial solver (i.e., CPLEX). TS and SA, combined with MWKR-FF as the initial solution, gave the best solutions among all the heuristics proposed. Their results were close to CPLEX's, and for some larger instances, with more than 225 total operations, they were competitive in terms of solution quality and runtime. For some larger problem instances, CPLEX was unable to report a feasible solution even after running for several hours. Between TS and SA, the experimental study indicated that SA produced a better average Cmax across all instances. The solution approaches proposed will help practitioners schedule a job shop (with both discrete and batch-processing machines) more efficiently. They are easy to implement and require short run times to solve large problem instances.
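To make the network representation concrete: once all disjunctive arcs have been oriented (i.e., the sequencing on each machine is fixed), the makespan equals the length of the longest path through the resulting acyclic graph. The sketch below computes that longest path over toy data; it illustrates the general disjunctive-graph idea rather than the specific representation proposed in the abstract, and the operation names are invented.

```python
from collections import defaultdict, deque

def makespan(durations, arcs):
    """Makespan of a fully sequenced schedule encoded as a DAG: Cmax is the
    longest path through the graph of conjunctive (routing) arcs plus the
    oriented disjunctive (machine-sequencing) arcs.

    durations : {op: processing time}
    arcs      : iterable of (op_u, op_v) precedence edges
    """
    succ, indeg = defaultdict(list), defaultdict(int)
    for u, v in arcs:
        succ[u].append(v)
        indeg[v] += 1
    finish = {op: durations[op] for op in durations}   # earliest finish times
    queue = deque(op for op in durations if indeg[op] == 0)
    while queue:                        # longest path via topological order
        u = queue.popleft()
        for v in succ[u]:
            finish[v] = max(finish[v], finish[u] + durations[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return max(finish.values())

# two jobs of two operations each, plus one fixed machine-sequencing arc
ops = {"j1o1": 3, "j1o2": 2, "j2o1": 4, "j2o2": 1}
route = [("j1o1", "j1o2"), ("j2o1", "j2o2"), ("j1o1", "j2o1")]
print(makespan(ops, route))   # 8
```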
Abstract:
This dissertation investigated the relationship between the September 11, 2001 terrorist attacks and the internationalization agenda of U.S. colleges and universities. The construct, post-9/11 syndrome, is used metaphorically to delineate the apparent state of panic and disequilibrium that followed the incident. Three research questions were investigated, with two universities in the Miami area of South Florida, one private and the other public, serving as qualitative case studies. The questions were: (a) How are international student advisors and administrators across the two types of institutions dealing with the post-9/11 syndrome? (b) What, if any, are the differences in international education after 9/11? (c) What have been the institutional priorities in relation to international education before and after 9/11? Data-gathering methods included interviews with international student/study abroad advisors and administrators with at least 8 years of experience in the function(s) at their institutions, together with document and institutional data analysis. The interviews were based on the three-part scheme developed by Schuman (1982): context of experience, details of experience, and reflection on the meaning of experiences. Taped interviews, researcher insights, and member checks of transcripts constituted an audit trail for this study. Key findings included a progressive decline in Fall-to-Fall enrollment of international students, of 13.05% at UM in the 5 years after 9/11 and of 6.15% at FIU in the seven post-9/11 years. In both institutions there was an upsurge of interest in study abroad during the same period, but fewer than 5% of enrolled students ventured abroad annually. I summarized the themes associated with the post-9/11 environment of international education, as perceived by my participants at both institutions, as 3Ms, 3Ts, and 1D: Menace of Anxiety and Fear, Menace of Insularity and Insecurity, Menace of Over-Regulation and Bigotry, Trajectory of Opportunity, Trajectory of Contradictions, Trajectory of Illusion, Fatalism and Futility, and Dominance of Technology. Based on these findings, I recommended an integrated Internationalization At Home Plus Collaborative Outreach (IAHPCO) approach to internationalization, based on a post-9/11 recalibration of national security and international education as complementary rather than diametrically opposed concepts.
Abstract:
The urban landscape of Yerevan has experienced tremendous changes since the collapse of the Soviet Union and Armenia's independence in 1991. Domestic and foreign investments have poured into Yerevan's building sector, converting many downtown neighborhoods into sleek modern districts that now cater to foreign investors, tourists, and the newly rich Armenian nationals. Large portions of the city's green parks and other public spaces have been commercialized for private and exclusive use, creating zones that are accessible only to the affluent. In this dissertation I explore the rapidly transforming landscape of Yerevan and its connections to the development of contemporary Armenian national identity. This research was guided by principles of ethnographic inquiry, and I employed diverse methods, including document and archival research, structured and semi-structured interviews, and content analysis of news media. I also used geographic information systems (GIS) and satellite images to represent and visualize the stark transformations of spaces in Yerevan. Informed by and contributing to three literatures—on the relationship between landscape and identity formation, on the construction of national identity, and on Soviet and post-Soviet cities—this dissertation investigates how messages about contemporary Armenian national identity are being expressed via the transforming landscape of Armenia's national capital. In it I describe the ways in which abrupt transformations have resulted in the physical and symbolic eviction of residents, introducing fierce public debates about belonging and exclusion within the changing urban context. I demonstrate that the new additions to Yerevan's landscape and the symbolic messages that they carry are hotly contested by many long-time residents, who struggle for inclusion of their opinions and interests in the process of re-imagining their national capital. This dissertation illustrates many of the trends that are apparent in post-Soviet and post-Socialist space, while at the same time exposing some unique characteristics of the Armenian case.
Abstract:
This paper examines the history of schema theory and how culture is incorporated into it. Furthermore, the author argues that cultural schemas affect students' use of reader-based and text-based processing in reading.
Abstract:
The primary focus of this dissertation is to determine the degree to which political, economic, and socio-cultural elites in Jamaica and Trinidad & Tobago influenced the development of the Caribbean Court of Justice's (CCJ) original jurisdiction. As members of the Caribbean Community (CARICOM), both states replaced their protectionist model with open regionalism at the end of the 1980s. Open regionalism was adopted to make CARICOM member states internationally competitive; it was also expected to create a stable regional trade environment. To ensure a stable economic environment, a regional court with original jurisdiction was proposed. A six-member Preparatory Committee on the Caribbean Court of Justice (PREPCOM), on which Jamaica and Trinidad & Tobago sat, was formed to draft the Agreement Establishing the Caribbean Court of Justice, which would govern how the Court would interpret the Revised Treaty of Chaguaramas (RTC) and enforce judgments. Through the use of qualitative research methods, namely elite interviews, document data, and text analysis, and a focus on three levels of analysis (international, regional, and domestic), three major conclusions are drawn. First, changes in the international economic environment caused Jamaica and Trinidad & Tobago to support the establishment of a regional court. Second, Jamaica had far greater influence on the final structure of the CCJ than Trinidad & Tobago. Third, in both states the political elite had the greatest influence on the development and structure of the CCJ; the economic elite, followed by the socio-cultural elite, had a lesser impact. These findings are significant because they account for the impact of elites and elite behavior on institutions in a much-neglected category of states: the developing world.