7 results for prediction accuracy

in Digital Commons at Florida International University


Relevance:

70.00%

Publisher:

Abstract:

The nation's freeway systems are becoming increasingly congested. A major contributor to freeway congestion is traffic incidents: non-recurring events, such as accidents or stranded vehicles, that cause a temporary reduction in roadway capacity. Incidents can account for as much as 60 percent of all traffic congestion on freeways. One major freeway incident management strategy involves diverting traffic away from incident locations by relaying timely information through Intelligent Transportation Systems (ITS) devices such as dynamic message signs or real-time traveler information systems. The decision to divert traffic depends foremost on the expected duration of an incident, which is difficult to predict. In addition, the duration of an incident is affected by many contributing factors. Determining and understanding these factors can help in identifying and developing better strategies to reduce incident durations and alleviate traffic congestion. A number of research studies have attempted to develop models to predict incident durations, yet with limited success.

This dissertation research attempts to improve on these previous efforts by applying data mining techniques to a comprehensive incident database maintained by the District 4 ITS Office of the Florida Department of Transportation (FDOT). Two categories of incident duration prediction models were developed: "offline" models designed for use in the performance evaluation of incident management programs, and "online" models for real-time prediction of incident duration to aid traffic diversion decisions during an ongoing incident. Multiple data mining techniques were applied and evaluated: multiple linear regression analysis and a decision tree-based method were used to develop the offline models, while a rule-based method and the M5P tree algorithm were used to develop the online models.

The results show that the models can generally achieve high prediction accuracy within acceptable time intervals of the actual durations. The research also identifies some new contributing factors that have not been examined in past studies. As part of the research effort, software code was developed to implement the models in the existing software system of FDOT District 4 for actual applications.
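The offline modeling step can be sketched with a toy multiple linear regression fitted to historical incident records. Everything below (the feature names, coefficients, and synthetic data) is an invented illustration, not the actual FDOT District 4 model:

```python
import numpy as np

# Hypothetical sketch of an "offline" multiple linear regression model
# for incident duration. The features (lanes blocked, severity) and all
# coefficients are invented for illustration; they are not taken from
# the FDOT District 4 incident database.
rng = np.random.default_rng(0)
n = 200
lanes_blocked = rng.integers(1, 4, n).astype(float)  # lanes closed by the incident
severity = rng.integers(1, 5, n).astype(float)       # incident severity code
noise = rng.normal(0.0, 2.0, n)
duration = 10 + 8 * lanes_blocked + 5 * severity + noise  # minutes

# Fit ordinary least squares via the design matrix [1, lanes, severity].
X = np.column_stack([np.ones(n), lanes_blocked, severity])
coef, *_ = np.linalg.lstsq(X, duration, rcond=None)

pred = X @ coef
mae = float(np.mean(np.abs(pred - duration)))
print("coefficients:", coef, "MAE (min):", mae)
```

An "online" variant would follow the same shape but restrict itself to features observable while the incident is still in progress.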

Relevance:

60.00%

Publisher:

Abstract:

Organizational socialization theory and the university student retention literature support the concept that social integration influences new recruits' satisfaction with the organization and their decision to remain. This three-phase study proposes and tests a Cultural Distance Model of student retention based on Tinto's (1975) Student Integration Model, Louis's (1980) Model of Newcomer Experience, and Kuh and Love's (2000) theory relating cultural distance to departure from the organization.

The main proposition tested was that the greater the cultural distance, the greater the likelihood of early departure from the organization. Accordingly, it was inferred that new recruits entering the university culture experience some degree of social and psychological distance, and that the extent of this distance influences satisfaction with the institution and intent to remain for subsequent years.

The model was tested through two freshman surveys designed to examine the effects of cultural distance on non-Hispanic students at a predominantly Hispanic, urban, public university. The first survey was administered eight weeks into the students' first Fall semester and the second at the end of their first year. Retention was determined by re-enrollment for the second Fall semester. Path analysis tested the viability of the hypothesized relationships among cultural distance, satisfaction, and retention suggested in the model, and logistic regression tested the model's predictive power.

Correlations among variables were significant, and the model accounted for 54% of the variance in students' decisions to return for the second year, with 96% prediction accuracy. Initial feelings of high cultural distance were related to increased dissatisfaction with social interactions and institutional choice at the end of the first year, and to students' intention not to re-enroll. Path analysis results supported the view that the construct of cultural distance incorporates both social and psychological distance and is composed of beliefs about institutional fit with one's cultural expectations, individual comfort with that fit, and the consequent sense of "belonging" or identifying with the institution.
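The logistic-regression step can be sketched as follows. The coefficients, the 0.5 decision threshold, and the simulated cultural-distance scores are assumptions for illustration only, not estimates from the study:

```python
import math
import random

# Hypothetical sketch of logistic regression for retention: predicting
# second-year re-enrollment from a single "cultural distance" score.
# The coefficients and simulated data are invented, not study estimates.
random.seed(1)
beta0, beta1 = 4.0, -1.5  # assumed: greater distance lowers retention odds

def p_return(distance):
    """Probability of re-enrollment under the assumed logistic model."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * distance)))

distances = [random.uniform(0.0, 5.0) for _ in range(100)]
# Simulate outcomes from the same model, then score 0.5-threshold predictions.
outcomes = [1 if random.random() < p_return(d) else 0 for d in distances]
preds = [1 if p_return(d) >= 0.5 else 0 for d in distances]
accuracy = sum(p == o for p, o in zip(preds, outcomes)) / len(outcomes)
print("prediction accuracy:", accuracy)
```

The study's reported 96% figure is this kind of classification accuracy: the share of students whose actual re-enrollment decision matched the model's prediction.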

Relevance:

60.00%

Publisher:

Abstract:

The National Council Licensure Examination for Registered Nurses (NCLEX-RN) is the examination that all graduates of nursing education programs must pass to attain the title of registered nurse. At the time of this study, the NCLEX-RN passing rate was at an all-time low (81%) for first-time test takers (NCSBN, 2004), amidst a nationwide shortage of registered nurses (Glabman, 2001). Because of the critical need to supply greater numbers of professional nurses, and the potential accreditation ramifications that low NCLEX-RN passing rates can have for schools of nursing and their graduates, this study tests the effectiveness of a predictor model based on the theoretical framework of McClusky's (1959) theory of margin (ToM), with the hope that students at risk of NCLEX-RN failure can be identified and remediated before taking the actual licensure examination. To date, no theory-based predictor model had been identified that predicts success on the NCLEX-RN.

The model was tested using prerequisite course grades, nursing course grades, and scores on standardized examinations for the 2003 associate degree nursing graduates of an urban community college (N = 235). Success was determined by the Florida Board of Nursing's reporting of a pass on the NCLEX-RN examination. Point-biserial correlations tested the model's assumptions regarding variable relationships, while logistic regression tested its predictive power.

Correlations among variables were significant, and the model accounted for 66% of the variance in graduates' success on the NCLEX-RN, with 98% prediction accuracy. Although certain prerequisite and nursing course grades were significantly related to NCLEX-RN success, the overall model was most predictive at the conclusion of the academic program. The RN Assessment Examination, taken during the final semester of course work, was the most significant predictor of NCLEX-RN success. Success on the NCLEX-RN allows graduates to work as registered nurses, reflects positively on a school's academic performance record, and supports the appropriateness of the educational program's goals and objectives. The study's findings support other potential uses of McClusky's theory of margin as a predictor of program outcomes in other venues of adult education.
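The point-biserial screening step can be sketched in a few lines: it is simply a correlation between a continuous predictor and a dichotomous outcome. The ten student records below are invented for illustration, not data from the study:

```python
import math
import statistics

# Hypothetical sketch of the point-biserial correlation used to screen
# predictors: correlation between a continuous score (e.g. an exam
# grade) and a dichotomous outcome (NCLEX-RN pass = 1, fail = 0).
scores = [88, 92, 75, 81, 95, 60, 70, 85, 90, 65]
passed = [1, 1, 0, 1, 1, 0, 0, 1, 1, 0]

mean_pass = statistics.mean(s for s, p in zip(scores, passed) if p == 1)
mean_fail = statistics.mean(s for s, p in zip(scores, passed) if p == 0)
n1 = sum(passed)
n0 = len(passed) - n1
n = len(scores)
sd = statistics.pstdev(scores)  # population SD, as the formula requires

r_pb = (mean_pass - mean_fail) / sd * math.sqrt(n1 * n0 / n**2)
print("point-biserial r:", r_pb)
```

A large positive r_pb, as in this toy sample, is the pattern the study relied on when selecting grades and examination scores as candidate predictors for the logistic model.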

Relevance:

60.00%

Publisher:

Abstract:

The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications render administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques to substantially reduce data center management complexity.

We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold: cloud users can size their VMs appropriately and pay only for the resources they need, and service providers can offer a new charging model based on the VMs' performance rather than their configured sizes. As a result, clients pay exactly for the performance they actually experience, while administrators can maximize total revenue by exploiting application performance models and SLAs.

This thesis made the following contributions. First, we identified the resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Networks and Support Vector Machines, for accurately modeling the performance of virtualized applications. We also suggested and evaluated modeling optimizations necessary to improve prediction accuracy with these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue of a data center.
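The ANN modeling idea can be sketched with a tiny two-layer network trained by plain gradient descent. The architecture, the invented latency function, and all hyperparameters are assumptions for illustration; they stand in for, and are much simpler than, the ANN/SVM tooling the thesis actually used:

```python
import numpy as np

# Hypothetical sketch of performance modeling with a small neural
# network: map resource allocations (CPU share, memory share) to an
# invented application latency. Not the thesis's actual ANN/SVM setup.
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 1.0, (300, 2))       # columns: [cpu_share, mem_share]
y = 1.0 / (X[:, 0] * X[:, 1] + 0.1)       # latency falls as resources grow

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
lr, losses = 0.01, []
for _ in range(3000):                      # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)               # hidden activations
    err = (h @ W2 + b2).ravel() - y        # prediction error
    losses.append(float(np.mean(err ** 2)))
    gW2 = h.T @ err[:, None] / len(X)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
print("loss: first", losses[0], "last", losses[-1])
```

Once such a model maps an allocation to predicted performance, VM sizing becomes a search over allocations for the cheapest one that still meets the SLA target.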

Relevance:

30.00%

Publisher:

Abstract:

This study investigated the effects of word prediction and text-to-speech on the narrative composition writing skills of six fifth-grade Hispanic boys with specific learning disabilities (SLD). A multiple-baseline design across subjects was used to explore the efficacy of word prediction and text-to-speech, alone and in combination, on four dependent variables: writing fluency (words per minute), syntax (T-units), spelling accuracy, and overall organization (holistic scoring rubric). Data were collected and analyzed during baseline, during the assistive technology interventions, and at 2-, 4-, and 6-week maintenance probes.

Participants were divided equally into Cohorts A and B, and two separate but related studies were conducted. Throughout all phases of the study, participants wrote narrative compositions in 15-minute sessions. During baseline, participants used word processing only. During the assistive technology intervention condition, Cohort A participants used word prediction followed by word prediction with text-to-speech; concurrently, Cohort B participants used text-to-speech followed by text-to-speech with word prediction.

The results indicate that word prediction, alone or in combination with text-to-speech, has a positive effect on the narrative writing compositions of students with SLD. Overall, participants in Cohorts A and B wrote more words, produced more T-units, and spelled more words correctly. A sign test indicated that these effects were not likely due to chance. Additionally, the quality of writing improved as measured by holistic rubric scores. When participants in Cohort B used text-to-speech alone, inconsequential results were observed on all dependent variables except spelling accuracy.

This study demonstrated that word prediction, alone or in combination with text-to-speech, helps students with SLD write longer, higher-quality narrative compositions. These results suggest that word prediction, or word prediction with text-to-speech, be considered as a writing support to facilitate the production of a first draft of a narrative composition. However, caution should be exercised in using text-to-speech alone, as its effectiveness has not been established. Recommendations for future research include investigating the use of these technologies in other phases of the writing process, with other student populations, and with other writing styles. Further, these technologies should be investigated as integrated into classroom composition instruction.

Relevance:

30.00%

Publisher:

Abstract:

Bankruptcy prediction has been a fruitful area of research. Univariate analysis and discriminant analysis were the first methodologies used. While they perform relatively well at correctly classifying bankrupt and non-bankrupt firms, their predictive ability has come into question over time. Univariate analysis lacks the big picture that financial distress entails, and multivariate discriminant analysis requires stringent assumptions that are violated when dealing with accounting ratios and market variables. This has led to the use of more complex models such as neural networks.

While the accuracy of predictions has improved with the use of more technical models, an important element is still missing. Accounting ratios are the usual discriminating variables used in bankruptcy prediction, but they are backward-looking: at best, they are a current snapshot of the firm. Market variables, by contrast, are forward-looking, determined by discounting future outcomes. Microstructure variables, such as the bid-ask spread, also contain important information: insiders are privy to more information than the retail investor, so if financial distress is looming, insiders should know before the general public. Therefore, a bankruptcy prediction model should include market and microstructure variables, and that is the focus of this dissertation.

The traditional models and the newer, more technical models were tested and compared with the previous literature by employing accounting ratios, market variables, and microstructure variables. Our findings suggest that the more technical models are preferable, and that a mix of accounting and market variables is best at correctly classifying and predicting bankrupt firms. Based on the results, the multi-layer perceptron appears to be the most accurate model. The set of best discriminating variables includes price, the standard deviation of price, the bid-ask spread, net income to sales, working capital to total assets, and current liabilities to total assets.
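Assembling the discriminating variables named above from raw firm data can be sketched as follows. All numbers are invented for illustration; they are not from the dissertation's sample:

```python
# Hypothetical sketch of building the study's best discriminating
# variables (market, microstructure, and accounting features) from raw
# firm data. Every number here is invented for illustration.
firm = {
    "price": 12.50, "price_std": 1.80,          # market variables
    "bid": 12.45, "ask": 12.55,                 # microstructure inputs
    "net_income": 3.2, "sales": 40.0,           # accounting inputs ($M)
    "working_capital": 5.0, "current_liabilities": 18.0,
    "total_assets": 60.0,
}

features = {
    "price": firm["price"],
    "price_std": firm["price_std"],
    "bid_ask_spread": firm["ask"] - firm["bid"],
    "ni_to_sales": firm["net_income"] / firm["sales"],
    "wc_to_ta": firm["working_capital"] / firm["total_assets"],
    "cl_to_ta": firm["current_liabilities"] / firm["total_assets"],
}
print(features)
```

A feature vector of this shape is what would be fed to the classifiers compared in the study, from discriminant analysis up to the multi-layer perceptron.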