11 results for Repeated Averages of Real-Valued Functions
in Digital Commons at Florida International University
Abstract:
College personnel are required to provide accommodations for students who are deaf and hard of hearing (D/HoH), but few empirical studies have been conducted on D/HoH students as they learn under the various accommodation conditions (sign language interpreting [SLI], real-time captioning [RTC], and both). Guided by the experiences of students who are D/HoH at Miami-Dade College (MDC) who requested RTC in addition to SLI as accommodations, the researcher adopted Mertens' transformative-emancipatory theoretical framework, which values the perceptions and voice of students who are D/HoH. A mixed methods design addressed two research questions: Did student learning differ for each accommodation? What did students experience while learning through accommodations? Participants included 30 students who were D/HoH (60% women). They represented MDC's majority-minority population: 10% White (non-Hispanic), 20% Black (non-Hispanic, including Haitian/Caribbean), 67% Hispanic, and 3% other. Hearing loss ranged from severe-profound (70%) to mild-moderate (30%). All were able to communicate in American Sign Language. Learning was measured while students who were D/HoH viewed three lectures under three accommodation conditions (SLI, RTC, SLI+RTC). The learning measure was defined as the difference in pre- and post-test scores on tests of the content presented in the lectures. Repeated measures ANOVA and ANCOVA were used, with the confounding variables of fluency in American Sign Language and literacy skills treated as covariates. Perceptions were obtained through interviews and verbal protocol analysis that were signed, videotaped, transcribed, coded, and examined for common themes and metacognitive strategies. No statistically significant differences were found among the three accommodations on the learning measure. Students who were D/HoH expressed thoughts about five different aspects of their learning while they viewed lectures: (a) comprehending the information, (b) feeling a part of the classroom environment, (c) past experiences with an accommodation, (d) individual preferences for an accommodation, and (e) suggestions for improving an accommodation. They exhibited three metacognitive strategies: (a) constructing knowledge, (b) monitoring comprehension, and (c) evaluating information. No patterns were found in the types of metacognitive strategies used for any particular accommodation. The researcher offers recommendations for flexible applications of the standard accommodations used with students who are D/HoH.
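The learning measure above is a gain score (post-test minus pre-test) compared across the three accommodation conditions with a repeated measures ANOVA. As a rough illustration of that comparison only (not the dissertation's actual analysis, which also treated ASL fluency and literacy as covariates), the sketch below runs a one-way repeated-measures ANOVA on simulated gain scores with statsmodels; the column names and placeholder data are assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Hypothetical long-format data: one gain score (post-test minus pre-test)
# per student under each of the three accommodation conditions.
df = pd.DataFrame({
    "student": np.tile(np.arange(30), 3),
    "accommodation": np.repeat(["SLI", "RTC", "SLI+RTC"], 30),
    "gain": rng.normal(loc=5.0, scale=2.0, size=90),  # placeholder scores
})

# One-way repeated-measures ANOVA on the learning measure.
# (An ANCOVA with ASL fluency and literacy as covariates would need an OLS or
# mixed-model formulation instead; AnovaRM does not accept covariates.)
print(AnovaRM(data=df, depvar="gain", subject="student",
              within=["accommodation"]).fit())
```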
Abstract:
Crash reduction factors (CRFs) are used to estimate the number of traffic crashes expected to be prevented by investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach. This approach suffers from a widely recognized problem known as regression-to-the-mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), which is a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop the SPFs for different functional classes of the Florida State Highway System. Crash data from years 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs for both rural and urban roadway categories were developed. The modeling data used were based on one-mile segments that contain homogeneous traffic and geometric conditions within each segment. Segments involving intersections were excluded. The scatter plots of the data show that the relationships between crashes and traffic exposure are nonlinear, with crashes increasing with traffic exposure at an increasing rate. Four regression models, namely, Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for individual roadway categories. The best model was selected for each category based on a combination of the Likelihood Ratio test, the Vuong statistical test, and Akaike's Information Criterion (AIC). The NBRM model was found to be appropriate for only one category, and the ZINB model was found to be more appropriate for six other categories. The overall results show that the Negative Binomial distribution model generally provides a better fit for the data than the Poisson distribution model. In addition, the ZINB model was found to give the best fit when the count data exhibit excess zeros and over-dispersion, which held for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of the many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. However, with improved traffic and crash data quality, the crash prediction power of SPF models may be further improved.
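The model-selection step described above (fitting Poisson, NB, ZIP, and ZINB models to one-mile-segment crash counts and comparing them) can be sketched roughly as follows. This is an illustrative outline on simulated data, not the dissertation's actual SPF estimation; the variable names, the log-AADT exposure term, and the simulated counts are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.discrete.count_model import (ZeroInflatedPoisson,
                                              ZeroInflatedNegativeBinomialP)

rng = np.random.default_rng(1)

# Hypothetical one-mile-segment records: crash counts and AADT (traffic exposure).
df = pd.DataFrame({
    "crashes": rng.poisson(1.5, size=500),
    "aadt": rng.uniform(5_000, 60_000, size=500),
})
X = sm.add_constant(np.log(df["aadt"]))  # SPFs are typically log-linear in exposure

models = {
    "Poisson (PRM)": sm.Poisson(df["crashes"], X).fit(disp=0),
    "Neg. Binomial (NBRM)": sm.NegativeBinomial(df["crashes"], X).fit(disp=0),
    "ZIP": ZeroInflatedPoisson(df["crashes"], X, exog_infl=X).fit(disp=0, maxiter=200),
    "ZINB": ZeroInflatedNegativeBinomialP(df["crashes"], X, exog_infl=X).fit(disp=0, maxiter=200),
}

# Compare the candidate count models by AIC (the Vuong and likelihood-ratio
# tests used in the dissertation would be computed separately).
for name, res in models.items():
    print(f"{name:>22s}  AIC = {res.aic:.1f}")
```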
Abstract:
In 2010, the American Association of State Highway and Transportation Officials (AASHTO) released a safety analysis software system known as SafetyAnalyst. SafetyAnalyst implements the empirical Bayes (EB) method, which requires the use of Safety Performance Functions (SPFs). The system is equipped with a set of national default SPFs, and the software calibrates the default SPFs to represent the agency's safety performance. However, it is recommended that agencies generate agency-specific SPFs whenever possible. Many investigators support the view that the agency-specific SPFs represent the agency data better than the national default SPFs calibrated to agency data. Furthermore, it is believed that the crash trends in Florida are different from the states whose data were used to develop the national default SPFs. In this dissertation, Florida-specific SPFs were developed using the 2008 Roadway Characteristics Inventory (RCI) data and crash and traffic data from 2007-2010 for both total and fatal and injury (FI) crashes. The data were randomly divided into two sets, one for calibration (70% of the data) and another for validation (30% of the data). The negative binomial (NB) model was used to develop the Florida-specific SPFs for each of the subtypes of roadway segments, intersections and ramps, using the calibration data. Statistical goodness-of-fit tests were performed on the calibrated models, which were then validated using the validation data set. The results were compared in order to assess the transferability of the Florida-specific SPF models. The default SafetyAnalyst SPFs were calibrated to Florida data by adjusting the national default SPFs with local calibration factors. The performance of the Florida-specific SPFs and SafetyAnalyst default SPFs calibrated to Florida data were then compared using a number of methods, including visual plots and statistical goodness-of-fit tests. The plots of SPFs against the observed crash data were used to compare the prediction performance of the two models. Three goodness-of-fit tests, represented by the mean absolute deviance (MAD), the mean square prediction error (MSPE), and Freeman-Tukey R2 (R2FT), were also used for comparison in order to identify the better-fitting model. The results showed that Florida-specific SPFs yielded better prediction performance than the national default SPFs calibrated to Florida data. The performance of Florida-specific SPFs was further compared with that of the full SPFs, which include both traffic and geometric variables, in two major applications of SPFs, i.e., crash prediction and identification of high crash locations. The results showed that both SPF models yielded very similar performance in both applications. These empirical results support the use of the flow-only SPF models adopted in SafetyAnalyst, which require much less effort to develop compared to full SPFs.
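As an illustration of the three goodness-of-fit measures used above to compare SPF models, a minimal helper computing MAD, MSPE, and a Freeman-Tukey R² is sketched below. The function name is hypothetical, and the Freeman-Tukey formulation shown is one common form from the crash-modeling literature; the dissertation's exact definition may differ.

```python
import numpy as np

def spf_gof(y_obs, y_pred):
    """Goodness-of-fit measures for comparing SPF crash predictions (hypothetical helper)."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mad = np.mean(np.abs(y_obs - y_pred))     # mean absolute deviance (MAD)
    mspe = np.mean((y_obs - y_pred) ** 2)     # mean square prediction error (MSPE)
    # Freeman-Tukey R^2, built from variance-stabilized residuals
    # (one common formulation; definitions vary slightly across studies).
    f_obs = np.sqrt(y_obs) + np.sqrt(y_obs + 1.0)
    ft_resid = f_obs - np.sqrt(4.0 * y_pred + 1.0)
    r2_ft = 1.0 - np.sum(ft_resid ** 2) / np.sum((f_obs - f_obs.mean()) ** 2)
    return {"MAD": mad, "MSPE": mspe, "R2_FT": r2_ft}

# Toy comparison on placeholder observed vs. predicted segment crash counts.
print(spf_gof([0, 2, 1, 4, 3], [0.5, 1.8, 1.2, 3.5, 2.9]))
```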
Abstract:
To help lawyers uncover jurors' attitudes and predict verdicts, litigation experts recommend that attorneys encourage jurors to repeatedly express their attitudes during voir dire. While the social cognitive literature has established that repeated expression of attitudes increases accessibility and behavior predictability, the persuasive twist on the method exercised in trials deserves empirical investigation. Only one study has examined the use of repeated expression within a legal context, with the results finding that the tactic increased accessibility but did not influence the attitude-verdict relationship. This dissertation reexamines the ability of civil attitudes to predict verdict in a civil trial and investigates the use of repeated expression as a persuasive tactic utilized by both parties (Plaintiff and Defense) within a civil voir dire in an attempt to increase attitudinal strength, via accessibility, and change attitudes to better predict verdict. This project also explores potential moderators, repetition by the opposing party and the use of a forewarning, to determine their ability to counter the effects of repeated expression on attitudes and verdict. This dissertation project asked subjects to take on the role of jurors in a civil case. During the voir dire questioning session, the number of times the participants were solicited by both parties to express their attitudes towards litigation crisis was manipulated (one vs. five). Also manipulated was the inclusion of a forewarning statement from the plaintiff, within which mock jurors were cautioned about the repeated tactics that the defense might use to influence their attitudes. Subsequently, participants engaged in a response latency task that measured the accessibility of their attitudes towards various case-related issues. After reading a vignette of a fictitious personal injury case, participants rendered verdict decisions and responded to an attitude evaluation scale. Exploratory factor analyses, probit regressions, and path analyses were used to analyze the data. Results indicated that the act of repeated expression influenced both the accessibility and value of litigation crisis attitudes, thus increasing the attitude-verdict relationship, but only when a single party engaged in it. Furthermore, the forewarning manipulation moderated the effect of repeated expression on attitude change and verdict, supporting our hypothesis.
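Because verdict is a binary outcome, the analysis above relies on probit regression. A rough sketch of such a model in statsmodels is shown below on simulated data; the variable names (verdict, attitude, repetition, forewarning) and coding are placeholders rather than the dissertation's design, and the path analyses are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300

# Placeholder mock-juror data: litigation-crisis attitude score, number of
# repeated expressions (one vs. five), forewarning (0/1), and a binary verdict.
df = pd.DataFrame({
    "attitude": rng.normal(size=n),
    "repetition": rng.choice([1, 5], size=n),
    "forewarning": rng.integers(0, 2, size=n),
})
latent = 0.8 * df["attitude"] * (df["repetition"] == 5) + rng.normal(size=n)
df["verdict"] = (latent > 0).astype(int)  # 1 = verdict for the plaintiff (placeholder)

# Probit model of verdict on attitudes, with repetition and forewarning as moderators.
res = smf.probit("verdict ~ attitude * C(repetition) * forewarning", data=df).fit(disp=0)
print(res.summary())
```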
Abstract:
This study investigated the effects of repeated readings on the reading abilities of four third-, fourth-, and fifth-grade English language learners (ELLs) with specific learning disabilities (SLD). A multiple baseline probe design across subjects was used to explore the effects of repeated readings on four dependent variables: reading fluency (words read correctly per minute; wpm), number of errors per minute (epm), types of errors per minute, and answers to literal comprehension questions. Data were collected and analyzed during baseline, intervention, generalization probes, and maintenance probes. Throughout the baseline and intervention phases, participants read a passage aloud and received error correction feedback. During baseline, this was followed by fluency and literal comprehension question assessments. During intervention, this was followed by two oral repeated readings of the passage, after which the fluency and literal comprehension question assessments were administered. Generalization probes followed approximately 25% of all sessions and consisted of a single reading of a new passage at the same readability level. Maintenance sessions occurred 2, 4, and 6 weeks after the intervention ended. The results of this study indicated that repeated readings had a positive effect on the reading abilities of ELLs with SLD. Participants read more wpm, made fewer epm, and answered more literal comprehension questions correctly. Additionally, on average, generalization scores were higher in intervention than in baseline. Maintenance scores varied when compared to the last day of intervention; however, with the exception of the number of hesitations committed per minute, maintenance scores were higher than baseline means. This study demonstrated that repeated readings improved the reading abilities of ELLs with SLD and that gains were generalized to untaught passages. Maintenance probes 2, 4, and 6 weeks following the intervention indicated that mean reading fluency, errors per minute, and correct answers to literal comprehension questions remained above baseline levels. Future research should investigate the use of repeated readings with ELLs with SLD at various stages of reading acquisition. Further, future investigations may examine how repeated readings can be integrated into classroom instruction and assessments.
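For reference, the two fluency measures reported above, words read correctly per minute (wpm) and errors per minute (epm), reduce to simple rates over the timed reading. A minimal sketch of that calculation follows; the function and argument names are illustrative only.

```python
def fluency_metrics(words_attempted: int, errors: int, seconds: float):
    """Return (wpm, epm): words read correctly per minute and errors per minute."""
    minutes = seconds / 60.0
    wpm = (words_attempted - errors) / minutes
    epm = errors / minutes
    return wpm, epm

# Example: 184 words attempted with 9 errors in a 2.5-minute timed reading.
print(fluency_metrics(184, 9, 150.0))  # -> (70.0, 3.6)
```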
Abstract:
In the early 1990s, the U.S. lodging industry witnessed a severe shortage of debt capital as traditional lenders exited the market. During this period, hotel lending was revolutionized by the emergence of real estate debt securities. The author discusses key factors that have affected the growth and development of commercial mortgage-backed securities and their changing role as a significant source of debt capital to the lodging industry.
Abstract:
For the past several decades, we have experienced tremendous growth, in both scale and scope, of real-time embedded systems, thanks largely to advances in IC technology. However, the traditional approach of boosting performance by increasing CPU frequency is now a thing of the past. Researchers from both industry and academia are turning their focus to multi-core architectures for continuous improvement of computing performance. In our research, we seek to develop efficient scheduling algorithms and analysis methods for the design of real-time embedded systems on multi-core platforms. Real-time systems are those in which response time is as critical as the logical correctness of computational results. In addition, a variety of stringent constraints such as power/energy consumption, peak temperature, and reliability are also imposed on these systems. Therefore, real-time scheduling plays a critical role in the design of such computing systems at the system level. We started our research by addressing timing constraints for real-time applications on multi-core platforms, and developed both partitioned and semi-partitioned scheduling algorithms to schedule fixed-priority, periodic, hard real-time tasks on multi-core platforms. We then extended our research by taking temperature constraints into consideration. We developed a closed-form solution to capture temperature dynamics for a given periodic voltage schedule on multi-core platforms, and also developed three methods to check the feasibility of a periodic real-time schedule under a peak temperature constraint. We further extended our research by incorporating the power/energy constraint with thermal awareness into our research problem. We investigated the energy estimation problem on multi-core platforms, and developed a computationally efficient method to calculate the energy consumption for a given voltage schedule on a multi-core platform. In this dissertation, we present our research in detail and demonstrate the effectiveness and efficiency of our approaches with extensive experimental results.
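To make the partitioned scheduling idea concrete, here is a minimal sketch of one classic approach: first-fit partitioning of fixed-priority periodic tasks onto cores under the Liu-Layland rate-monotonic utilization bound. This is a textbook heuristic offered only as an illustration; it is not the dissertation's algorithm, and it ignores the thermal and energy constraints discussed above.

```python
def rm_bound(n: int) -> float:
    """Liu-Layland rate-monotonic schedulability bound for n tasks on one core."""
    return n * (2.0 ** (1.0 / n) - 1.0)

def first_fit_partition(tasks, num_cores):
    """Assign periodic tasks (wcet, period) to cores with a first-fit heuristic,
    accepting a task on a core only if the RM utilization bound still holds.
    Returns a per-core task list, or None if the task set cannot be partitioned."""
    cores = [[] for _ in range(num_cores)]
    # Consider tasks in order of decreasing utilization (first-fit decreasing).
    for wcet, period in sorted(tasks, key=lambda t: t[0] / t[1], reverse=True):
        for core in cores:
            util = sum(c / p for c, p in core) + wcet / period
            if util <= rm_bound(len(core) + 1):
                core.append((wcet, period))
                break
        else:
            return None  # no core could accept the task under the bound
    return cores

# Example: six tasks (wcet, period) partitioned onto two cores.
print(first_fit_partition([(1, 4), (2, 8), (1, 5), (3, 12), (2, 10), (1, 6)], 2))
```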
Abstract:
The future power grid will effectively utilize renewable energy resources and distributed generation to respond to energy demand while incorporating information technology and communication infrastructure for optimum operation. This dissertation contributes to the development of real-time techniques for wide-area monitoring and secure real-time control and operation of hybrid power systems. To handle the increased level of real-time data exchange, this dissertation develops a supervisory control and data acquisition (SCADA) system equipped with a state estimation scheme driven by the real-time data. This system is verified on a specially developed laboratory-based test bed facility, serving as a hardware and software platform, to emulate the actual scenarios of a real hybrid power system with the highest level of similarity and capability to practical utility systems. It includes phasor measurements at hundreds of measurement points on the system. These measurements were obtained from an especially developed, laboratory-based Phasor Measurement Unit (PMU) that was used in conjunction with the interconnected system alongside existing commercial PMUs. The studies included a new technique for detecting partially islanded microgrids, in addition to several real-time techniques for synchronization and parameter identification of hybrid systems. Moreover, given the extensive integration of renewable energy resources through DC microgrids, this dissertation presents several practical cases for improving the interoperability of such systems. In addition, the growing number of small, dispersed generating stations and their need to connect quickly and properly to the AC grid led this work to explore the challenges that arise in synchronizing generators to the grid and to introduce a Dynamic Brake system that improves the process of connecting distributed generators to the power grid. Real-time operation and control require data communication security. A research effort in this dissertation developed a Trusted Sensing Base (TSB) process for data communication security. The innovative TSB approach improves the security aspect of the power grid as a cyber-physical system. It is based on available GPS synchronization technology and provides protection against confidentiality attacks in critical power system infrastructures.
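The state-estimation scheme mentioned above is, at its core, typically a weighted least-squares fit of the system state to the SCADA/PMU measurements. A bare-bones linearized WLS step is sketched below as a rough illustration; the matrices and measurement noise values are placeholders, and the dissertation's full estimator (and its security layer) is far more involved.

```python
import numpy as np

def wls_state_estimate(H, z, sigma):
    """One linearized weighted least-squares step:
    x_hat = (H^T W H)^-1 H^T W z, with W built from measurement variances."""
    W = np.diag(1.0 / np.asarray(sigma, dtype=float) ** 2)
    G = H.T @ W @ H                         # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Toy example: three measurements of a two-element state vector.
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # measurement Jacobian (placeholder)
z = np.array([1.02, 0.98, 2.05])                      # measured values (placeholder)
sigma = np.array([0.01, 0.01, 0.02])                  # per-measurement std. deviations
print(wls_state_estimate(H, z, sigma))
```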
Abstract:
The effectiveness of an optimization algorithm can be reduced to its ability to navigate an objective function’s topology. Hybrid optimization algorithms combine various optimization algorithms using a single meta-heuristic so that the hybrid algorithm is more robust, computationally efficient, and/or accurate than the individual algorithms it is made of. This thesis proposes a novel meta-heuristic that uses search vectors to select the constituent algorithm that is appropriate for a given objective function. The hybrid is shown to perform competitively against several existing hybrid and non-hybrid optimization algorithms over a set of three hundred test cases. This thesis also proposes a general framework for evaluating the effectiveness of hybrid optimization algorithms. Finally, this thesis presents an improved Method of Characteristics Code with novel boundary conditions, which better characterizes pipelines than previous codes. This code is coupled with the hybrid optimization algorithm in order to optimize the operation of real-world piston pumps.
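To illustrate the selection idea behind such a hybrid meta-heuristic, the toy sketch below alternates between two off-the-shelf optimizers and keeps whichever makes more progress from the current point. It is only a caricature of the approach under assumed constituent algorithms: the thesis's search-vector selection mechanism and its pipeline model are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize, rosen

def hybrid_optimize(f, x0, rounds=10, budget=100):
    """Toy hybrid meta-heuristic: each round, run a short burst of each
    constituent optimizer from the current point and keep the best result."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(rounds):
        trials = []
        for method in ("Nelder-Mead", "Powell"):
            res = minimize(f, x, method=method, options={"maxfev": budget})
            trials.append((res.fun, res.x))
        best_f, best_x = min(trials, key=lambda t: t[0])
        if best_f >= fx:          # no constituent algorithm improved; stop
            break
        x, fx = best_x, best_f
    return x, fx

# Example: minimize the Rosenbrock test function from a rough starting point.
print(hybrid_optimize(rosen, [1.5, -0.5, 2.0]))
```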