23 results for Propagation prediction models

in Digital Commons at Florida International University


Relevance: 100.00%

Abstract:

The nation's freeway systems are becoming increasingly congested. A major contributor to traffic congestion on freeways is traffic incidents: non-recurring events, such as accidents or stranded vehicles, that cause a temporary reduction in roadway capacity. Incidents can account for as much as 60 percent of all traffic congestion on freeways. One major freeway incident management strategy involves diverting traffic away from incident locations by relaying timely information through Intelligent Transportation Systems (ITS) devices such as dynamic message signs or real-time traveler information systems. The decision to divert traffic depends foremost on the expected duration of an incident, which is difficult to predict and is affected by many contributing factors. Determining and understanding these factors can help in identifying and developing better strategies to reduce incident durations and alleviate traffic congestion. A number of research studies have attempted to develop models to predict incident durations, yet with limited success.

This dissertation research attempts to improve on these previous efforts by applying data mining techniques to a comprehensive incident database maintained by the District 4 ITS Office of the Florida Department of Transportation (FDOT). Two categories of incident duration prediction models were developed: "offline" models designed for use in the performance evaluation of incident management programs, and "online" models for real-time prediction of incident duration to aid traffic diversion decisions during an ongoing incident. Multiple data mining techniques were applied and evaluated in the research: multiple linear regression and a decision tree based method were used to develop the offline models, and a rule-based method and a tree algorithm called M5P were used to develop the online models.

The results show that the models can generally achieve high prediction accuracy, within acceptable time intervals of the actual durations. The research also identifies some new contributing factors that have not been examined in past studies. As part of the research effort, software code was developed to implement the models in the existing software system of District 4 FDOT for actual applications.
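The flavor of the "online" rule-based approach can be illustrated with a minimal sketch. The rules, incident attributes, and duration values below are hypothetical illustrations, not FDOT's actual rules:

```python
def predict_duration_minutes(incident):
    """Rule-based online estimate of incident duration (hypothetical rules)."""
    # Severe crashes blocking multiple travel lanes tend to last longest.
    if incident["type"] == "crash" and incident["lanes_blocked"] >= 2:
        return 90
    if incident["type"] == "crash":
        return 45
    # Stranded/disabled vehicles are usually cleared quickly.
    if incident["type"] == "disabled_vehicle":
        return 20
    return 30  # fallback for other incident types

print(predict_duration_minutes({"type": "crash", "lanes_blocked": 3}))  # 90
```

A real rule set would be induced from the incident database rather than written by hand, but the deployed form is the same: cheap attribute tests that can run as soon as an incident is reported.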

Relevance: 90.00%

Abstract:

The composition and distribution of diatom algae inhabiting estuaries and coasts of the subtropical Americas are poorly documented, especially relative to the central role diatoms play in coastal food webs and to their potential utility as sentinels of environmental change in these threatened ecosystems. Here, we document the distribution of diatoms among the diverse habitat types and long environmental gradients represented by the shallow topographic relief of the South Florida, USA, coastline. A total of 592 species were encountered from 38 freshwater, mangrove, and marine locations in the Everglades wetland and Florida Bay during two seasonal collections, with the highest diversity occurring at sites of high salinity and low water column organic carbon concentration (WTOC). Freshwater, mangrove, and estuarine assemblages were compositionally distinct, but seasonal differences were only detected in mangrove and estuarine sites where solute concentration differed greatly between wet and dry seasons. Epiphytic, planktonic, and sediment assemblages were compositionally similar, implying a high degree of mixing along the shallow, tidal, and storm-prone coast. The relationships between diatom taxa and salinity, water total phosphorus (WTP), water total nitrogen (WTN), and WTOC concentrations were determined and incorporated into weighted averaging partial least squares regression models. Salinity was the most influential variable, resulting in a highly predictive model (apparent r² = 0.97, jackknifed r² = 0.95) that can be used in the future to infer changes in coastal freshwater delivery or sea-level rise in South Florida and compositionally similar environments. Models predicting WTN (apparent r² = 0.75, jackknifed r² = 0.46), WTP (apparent r² = 0.75, jackknifed r² = 0.49), and WTOC (apparent r² = 0.79, jackknifed r² = 0.57) were also strong, suggesting that diatoms can provide reliable inferences of changes in solute delivery to the coastal ecosystem.
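The inference step behind such transfer functions is simple: once each taxon has an estimated optimum for a variable, the variable is inferred for a new sample as the abundance-weighted mean of the optima of the taxa present. A minimal weighted-averaging sketch (species names, optima, and abundances below are hypothetical; WA-PLS adds further components on top of this idea):

```python
def wa_infer(abundances, optima):
    """Weighted-averaging inference: abundance-weighted mean of species optima."""
    present = [sp for sp in abundances if sp in optima]
    total = sum(abundances[sp] for sp in present)
    return sum(abundances[sp] * optima[sp] for sp in present) / total

# Hypothetical salinity optima (psu) and relative abundances in one sample
optima = {"sp_a": 2.0, "sp_b": 18.0, "sp_c": 35.0}
sample = {"sp_a": 0.1, "sp_b": 0.3, "sp_c": 0.6}
print(round(wa_infer(sample, optima), 1))  # 26.6
```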

Relevance: 90.00%

Abstract:

As users continually request additional functionality, software systems will continue to grow in complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring higher levels of reliability, faulty system modules may increase development and maintenance cost. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. Research effort to predict software modules likely to contain faults has consequently been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous research studies show that statistical models can achieve reasonable accuracy in predicting faulty modules using software metrics. However, as context-specific metrics differ from project to project, predicting across projects is difficult. Prediction models obtained from one project's experience are ineffective at identifying fault-prone modules when applied to other projects. Hence, the ability to take full advantage of existing work in the software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
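One common way to adapt a model across projects (not necessarily the dissertation's exact method) is to standardize each metric within its own project before applying the source model's weights, so that "large" means large relative to that project. A hedged sketch with hypothetical metrics and weights:

```python
import statistics

def standardize(values):
    """Map raw metric values to z-scores within their own project."""
    mu, sigma = statistics.mean(values), statistics.pstdev(values)
    return [(v - mu) / sigma for v in values]

def fault_score(z_loc, z_complexity):
    """Source-project scoring model applied to target-project z-scores
    (weights are hypothetical, for illustration)."""
    return 0.6 * z_loc + 0.4 * z_complexity

# Hypothetical target-project metrics: lines of code, cyclomatic complexity
loc = [100, 300, 900, 200]
cc = [4, 10, 30, 6]
scores = [fault_score(z1, z2) for z1, z2 in zip(standardize(loc), standardize(cc))]
print(max(range(len(scores)), key=lambda i: scores[i]))  # index of riskiest module
```

The standardization makes the source model's weights portable even when the raw metric scales differ between projects.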

Relevance: 90.00%

Abstract:

We developed diatom-based prediction models of hydrology and periphyton abundance to inform assessment tools for a hydrologically managed wetland. Because hydrology is an important driver of ecosystem change, hydrologic alterations by restoration efforts could modify biological responses, such as periphyton characteristics. In karstic wetlands, diatoms are particularly important components of mat-forming calcareous periphyton assemblages that both respond and contribute to the structural organization and function of the periphyton matrix. We examined the distribution of diatoms across the Florida Everglades landscape and found hydroperiod and periphyton biovolume were strongly correlated with assemblage composition. We present species optima and tolerances for hydroperiod and periphyton biovolume, for use in interpreting the directionality of change in these important variables. Predictions of these variables were mapped to visualize landscape-scale spatial patterns in a dominant driver of change in this ecosystem (hydroperiod) and an ecosystem-level response metric of hydrologic change (periphyton biovolume). Specific diatom assemblages inhabiting periphyton mats of differing abundance can be used to infer past conditions and inform management decisions based on how assemblages are changing. This study captures diatom responses to wide gradients of hydrology and periphyton characteristics to inform ecosystem-scale bioassessment efforts in a large wetland.
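Species optima and tolerances of the kind reported here are conventionally computed as the abundance-weighted mean and abundance-weighted standard deviation of the environmental variable across sites where the taxon occurs. A sketch with hypothetical hydroperiod data:

```python
import math

def optimum_and_tolerance(abundances, env):
    """Abundance-weighted mean (optimum) and weighted SD (tolerance) of an
    environmental variable for one taxon (hypothetical data below)."""
    total = sum(abundances)
    opt = sum(a * e for a, e in zip(abundances, env)) / total
    tol = math.sqrt(sum(a * (e - opt) ** 2 for a, e in zip(abundances, env)) / total)
    return opt, tol

# One taxon's abundance across sites with known hydroperiods (days)
hydroperiod = [60, 180, 300, 330]
abundance = [1, 5, 10, 4]
opt, tol = optimum_and_tolerance(abundance, hydroperiod)
print(round(opt), round(tol))  # 264 73
```

A taxon with a small tolerance is a sharper indicator of change in that variable than one with a broad tolerance.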

Relevance: 80.00%

Abstract:

The spatial and temporal distribution of modern diatom assemblages in surface sediments, on the most dominant macrophytes, and in the water column at 96 locations in Florida Bay, Biscayne Bay and adjacent regions were examined in order to develop paleoenvironmental prediction models for this region. Analyses of these distributions revealed distinct temporal and spatial differences in assemblages among the locations. The differences among diatom assemblages living on subaquatic vegetation and sediments, and in the water column were significant. Because concentrations of salts, total phosphorus (WTP), total nitrogen (WTN) and total organic carbon (WTOC) are partly controlled by water management in this region, diatom-based models were produced to assess these variables. Discriminant function analyses showed that diatoms can also be successfully used to reconstruct changes in the abundance of diatom assemblages typical for different habitats and life habits.

To interpret paleoenvironmental changes, changes in salinity, WTN, WTP and WTOC were inferred from diatoms preserved in sediment cores collected along environmental gradients in Florida Bay (4 cores) and from nearshore and offshore locations in Biscayne Bay (3 cores). The reconstructions showed that water quality conditions in these estuaries have been fluctuating for thousands of years due to natural processes and sea-level changes, but almost synchronized shifts in diatom assemblages occurred in the mid-1960s at all coring locations (except Ninemile Bank and Bob Allen Bank in Florida Bay). These alterations correspond to the major construction of numerous water management structures on the mainland. Additionally, all the coring sites (except Card Sound Bank, Biscayne Bay and Trout Cove, Florida Bay) showed decreasing salinity and fluctuations in nutrient levels in the last two decades that correspond to increased rainfall in the 1990s and increased freshwater discharge to the bays, a result of increased freshwater deliveries to the Everglades by the South Florida Water Management District in the 1980s and 1990s. Reconstructions of the abundance of diatom assemblages typical for different habitats and life habits revealed multiple sources of diatoms to the coring locations and showed that epiphytic assemblages in both bays have increased in abundance since the early 1990s.

Relevance: 80.00%

Abstract:

Traffic from major hurricane evacuations is known to cause severe gridlock on evacuation routes. Better prediction of the expected amount of evacuation traffic is needed to improve the decision-making process for required evacuation routes and the possible deployment of special traffic operations, such as contraflow. The objective of this dissertation is to develop models to predict the number of daily trips and the evacuation distance during a hurricane evacuation.

Two data sets from surveys of evacuees from Hurricanes Katrina and Ivan were used in the models' development. The data sets included detailed information on the evacuees: their evacuation days, evacuation distance, distance to the hurricane location, and their associated socioeconomic characteristics, including gender, age, race, household size, rental status, income, and education level.

Three prediction models were developed. The evacuation trip and rate models were developed using logistic regression. Together, they were used to predict the number of daily trips generated before hurricane landfall. These daily predictions allow for more detailed planning than traditional models, which predict only the total number of trips generated by an entire evacuation. A third model attempted to predict the evacuation distance using Geographically Weighted Regression (GWR), which was able to account for the spatial variation found among the different evacuation areas in terms of impacts from the model predictors. All three models were developed using the survey data set from Hurricane Katrina and then evaluated using the survey data set from Hurricane Ivan.

All of the models developed provided logical results. The logistic models showed that larger households with people under age six were more likely to evacuate than smaller households. The GWR-based evacuation distance model showed that the presence of children under age six, income, and the household's proximity to the hurricane path all had an impact on evacuation distances. While the models provided logical results, it was recognized that they were calibrated and evaluated with relatively limited survey data. The models can be refined with additional data from future hurricane surveys, including additional variables such as the time of day of the evacuation.
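The logistic form of the trip models can be sketched as follows. The predictors mirror those named in the abstract, but the coefficients are hypothetical placeholders, not the calibrated Katrina values:

```python
import math

def evacuation_probability(household_size, has_child_under_6, days_to_landfall):
    """Logistic model of the probability a household evacuates on a given day
    (coefficients are hypothetical, for illustration only)."""
    z = (-2.0
         + 0.3 * household_size
         + 0.8 * (1 if has_child_under_6 else 0)
         - 0.4 * days_to_landfall)
    return 1.0 / (1.0 + math.exp(-z))

# Larger household with a young child, one day before landfall
p = evacuation_probability(5, True, 1)
print(round(p, 3))  # 0.475
```

Evaluating the probability day by day is what lets the model spread the predicted demand over the days before landfall instead of producing a single total.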

Relevance: 80.00%

Abstract:

The spatial and temporal distribution of planktonic, sediment-associated and epiphytic diatoms among 58 sites in Biscayne Bay, Florida was examined in order to identify diatom taxa indicative of different salinity and water quality conditions, geographic locations and habitat types. Assessments were made in contrasting wet and dry seasons in order to develop robust assessment models for salinity and water quality for this region. We found that diatom assemblages differed between nearshore and offshore locations, especially during the wet season when salinity and nutrient gradients were steepest. In the dry season, habitat structure was the primary determinant of diatom assemblage composition. Among a suite of physicochemical variables, water depth and sediment total phosphorus (STP) were most strongly associated with diatom assemblage composition in the dry season, while salinity and water total phosphorus (TP) were more important in the wet season. We used indicator species analysis (ISA) to identify taxa that were most abundant and frequent at nearshore and offshore locations, in planktonic, epiphytic and benthic habitats and in contrasting salinity and water quality regimes. Because surface water concentrations of salts, total phosphorus, nitrogen (TN) and organic carbon (TOC) are partly controlled by water management in this region, diatom-based models were produced to infer these variables in modern and retrospective assessments of management-driven changes. Weighted averaging (WA) and weighted averaging partial least squares (WA-PLS) regressions produced reliable estimates of salinity, TP, TN and TOC from diatoms (r² = 0.92, 0.77, 0.77, and 0.71, respectively). Because of their sensitivity to salinity, nutrient, and TOC concentrations, diatom assemblages should be useful in developing protective nutrient criteria for estuaries and coastal waters of Florida.
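Indicator species analysis, in the widely used Dufrêne-Legendre form, scores a taxon for each site group as the product of its specificity (share of its mean abundance concentrated in that group) and its fidelity (fraction of the group's sites where it occurs), scaled to 0-100. A sketch with hypothetical abundances:

```python
def indval(abund_by_group):
    """Dufrene-Legendre indicator value of one taxon for each site group.
    abund_by_group maps group name -> abundances at that group's sites
    (data below are hypothetical)."""
    means = {g: sum(v) / len(v) for g, v in abund_by_group.items()}
    total_mean = sum(means.values())
    out = {}
    for g, v in abund_by_group.items():
        specificity = means[g] / total_mean               # A: concentration in group
        fidelity = sum(1 for x in v if x > 0) / len(v)    # B: frequency within group
        out[g] = 100 * specificity * fidelity
    return out

iv = indval({"nearshore": [10, 8, 12, 0], "offshore": [0, 2, 0, 0]})
print(max(iv, key=iv.get))  # nearshore
```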

Relevance: 80.00%

Abstract:

The purpose of this paper is to describe and discuss current bankruptcy prediction models, weighing the pros and cons of the proposed models for determining the appropriate factors of the failure phenomenon in cases involving restaurants that have filed for bankruptcy under Chapter 11. A sample of 11 restaurant companies that filed for bankruptcy between 1993 and 2003 was identified from Form 8-K filings reported to the Securities and Exchange Commission (SEC). Financial ratios retrieved from the annual reports, which contain income statements, balance sheets, statements of cash flows, and statements of stockholders' equity (or deficit), were applied to Altman's model, the Springate model, and Fulmer's model. The study found that Altman's model for the non-manufacturing industry provided the most accurate bankruptcy predictions.
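Altman's non-manufacturing variant is a four-ratio linear score, Z'' = 6.56X1 + 3.26X2 + 6.72X3 + 1.05X4, computable directly from balance-sheet items. The figures in the example are hypothetical:

```python
def altman_z_nonmanufacturing(wc, re, ebit, equity_bv, total_assets, total_liabilities):
    """Altman's four-variable Z''-score for non-manufacturing firms."""
    x1 = wc / total_assets              # working capital / total assets
    x2 = re / total_assets              # retained earnings / total assets
    x3 = ebit / total_assets            # EBIT / total assets
    x4 = equity_bv / total_liabilities  # book value of equity / total liabilities
    return 6.56 * x1 + 3.26 * x2 + 6.72 * x3 + 1.05 * x4

# Hypothetical restaurant balance-sheet figures (in $ millions)
z = altman_z_nonmanufacturing(wc=-5, re=-20, ebit=2, equity_bv=10,
                              total_assets=100, total_liabilities=90)
print(round(z, 2))  # -0.73; scores below ~1.1 fall in the distress zone
```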

Relevance: 30.00%

Abstract:

Pavement performance is one of the most important components of a pavement management system. Predicting the future performance of a pavement section is important in programming maintenance and rehabilitation needs. Models for predicting pavement performance have traditionally been developed on the basis of traffic and age. The purpose of this research is to extend a relatively new approach to performance prediction, adaptive logic networks (ALN), to pavement performance modeling. Adaptive logic networks have recently emerged as an effective alternative to artificial neural networks for machine learning tasks.

The ALN predictive methodology is applicable to a wide variety of contexts, including prediction of roughness-based indices, composite rating indices, and/or individual pavement distresses. The ALN program requires key information about a pavement section, including the current distress indexes, pavement age, climate region, traffic, and other variables, to predict yearly performance values into the future.

This research investigates the effect of different learning rates of the ALN in pavement performance modeling. The approach can be used at both the network and project level for predicting the long-term performance of a road network. Results indicate that the ALN approach is well suited for pavement performance prediction modeling and shows a significant improvement over the results obtained from other artificial intelligence approaches.
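An ALN represents a piecewise-linear function as a tree of min/max nodes over linear pieces, which is what makes it a transparent alternative to a neural network. The toy below is only a structural illustration; the pieces, weights, and tree shape are hypothetical, not a trained pavement model:

```python
def aln_predict(age, traffic):
    """Toy adaptive logic network: a max-of-mins tree over linear pieces,
    predicting a pavement condition index (weights are hypothetical)."""
    # Each leaf is a linear function of the inputs.
    piece_new   = 100 - 1.5 * age - 0.002 * traffic   # early, slow wear
    piece_aged  = 120 - 3.0 * age - 0.004 * traffic   # later, faster wear
    floor_piece = 20.0                                # condition index floor
    # min() selects whichever wear regime currently binds; max() applies the floor.
    return max(min(piece_new, piece_aged), floor_piece)

print(aln_predict(age=5, traffic=1000))   # young pavement: 'new' piece governs
print(aln_predict(age=40, traffic=1000))  # old pavement: clipped at the floor
```

Training an ALN adjusts the linear pieces (and grows the tree) so the min/max surface fits observed condition data.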

Relevance: 30.00%

Abstract:

A high-frequency physical phase-variable electric machine model was developed using finite element (FE) analysis. The model was implemented in a machine drive environment with hardware-in-the-loop. The novelty of the proposed model is that it is derived from the actual geometrical and other physical information of the motor, considering each individual turn in the winding. This is the first attempt to develop such a model to obtain high-frequency machine parameters without resorting to the expensive experimental procedures currently in use. The model was used in a dynamic simulation environment to predict inverter-motor interaction, including motor terminal overvoltage, current spikes, and switching effects. In addition, a complete drive model was developed for electromagnetic interference (EMI) analysis and evaluation, consisting of lumped parameter models of the different system components, such as the cable, inverter, and motor. The lumped parameter models enable faster simulations. The results obtained were verified by experimental measurements, and excellent agreement was obtained. The influence of a change in the winding arrangement on the motor's high-frequency behavior was also investigated; for an equal number of turns, this was shown to have little effect on the parameter values and on the motor's high-frequency behavior. An accurate prediction of overvoltage and EMI in the design stages of the drive system would reduce the time required for design modifications as well as for the evaluation of EMC compliance issues. The model can be utilized in design optimization and insulation selection for motors. Use of this procedure could prove economical, as it would help designers develop and test new motor designs and evaluate operational impacts in various motor drive applications.

Relevance: 30.00%

Abstract:

As congestion management strategies begin to put more emphasis on person trips than vehicle trips, the need for vehicle occupancy data has become more critical. The traditional methods of collecting these data include the roadside windshield method and the carousel method. These methods are labor-intensive and expensive. An alternative to these traditional methods is to make use of the vehicle occupancy information in traffic accident records. This method is cost effective and may provide better spatial and temporal coverage than the traditional methods. However, this method is subject to potential biases resulting from under- and over-involvement of certain population sectors and certain types of accidents in traffic accident records. In this dissertation, three such potential biases, i.e., accident severity, driver’s age, and driver’s gender, were investigated and the corresponding bias factors were developed as needed. The results show that although multi-occupant vehicles are involved in higher percentages of severe accidents than are single-occupant vehicles, multi-occupant vehicles in the whole accident vehicle population were not overrepresented in the accident database. On the other hand, a significant difference was found between the distributions of the ages and genders of drivers involved in accidents and those of the general driving population. An information system that incorporates adjustments for the potential biases was developed to estimate the average vehicle occupancies (AVOs) for different types of roadways on the Florida state roadway system. A reasonableness check of the results from the system shows AVO estimates that are highly consistent with expectations. In addition, comparisons of AVOs from accident data with the field estimates show that the two data sources produce relatively consistent results. 
While accident records can be used to obtain the historical AVO trends and field data can be used to estimate the current AVOs, no known methods have been developed to project future AVOs. Four regression models for the purpose of predicting weekday AVOs on different levels of geographic areas and roadway types were developed as part of this dissertation. The models show that such socioeconomic factors as income, vehicle ownership, and employment have a significant impact on AVOs.
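A weekday AVO regression of the kind described reduces, at prediction time, to a linear combination of area-level socioeconomic factors. The coefficients below are hypothetical placeholders chosen only to illustrate the form, not the dissertation's calibrated values:

```python
def predict_avo(median_income_k, vehicles_per_household, employment_rate):
    """Linear weekday AVO model (coefficients hypothetical, for illustration)."""
    return (1.8
            - 0.004 * median_income_k        # higher income: more solo driving
            - 0.15 * vehicles_per_household  # more vehicles: lower occupancy
            + 0.3 * employment_rate)         # employment shifts trip purposes

avo = predict_avo(median_income_k=50, vehicles_per_household=1.8, employment_rate=0.6)
print(round(avo, 2))  # 1.51
```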

Relevance: 30.00%

Abstract:

This dissertation aimed to improve travel time estimation for the purpose of transportation planning by developing a travel time estimation method that incorporates the effects of signal timing plans, which are difficult to consider in planning models. For this purpose, an analytical model was developed. The model parameters were calibrated based on data from CORSIM microscopic simulation, with signal timing plans optimized using the TRANSYT-7F software. Independent variables in the model are link length, free-flow speed, and traffic volumes from the competing turning movements. The developed model has three advantages over traditional link-based or node-based models. First, the model considers the influence of signal timing plans for a variety of traffic volume combinations without requiring signal timing information as input. Second, the model describes the non-uniform spatial distribution of delay along a link, thus being able to estimate the impacts of queues at different upstream locations of an intersection and to attribute delays to a subject link and its upstream link. Third, the model shows promise of improving the accuracy of travel time prediction. The mean absolute percentage error (MAPE) of the model is 13% for a set of field data from the Minnesota Department of Transportation (MDOT); this is close to the MAPE of the uniform delay in the HCM 2000 method (11%). The HCM is the industry-accepted analytical model in the existing literature, but it requires signal timing information as input for calculating delays. The developed model also outperforms the HCM 2000 method for a set of Miami-Dade County data representing congested traffic conditions, with a MAPE of 29%, compared to 31% for the HCM 2000 method. The advantages of the proposed model make it feasible for application to a large network without the burden of signal timing input, while improving the accuracy of travel time estimation.
An assignment model with the developed travel time estimation method has been implemented in a South Florida planning model, which improved assignment results.
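The MAPE figures quoted above follow the standard definition: the mean of absolute errors expressed as a percentage of the observed values. The travel times in the example are hypothetical:

```python
def mape(actual, predicted):
    """Mean absolute percentage error over paired observations."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical link travel times (seconds): field-measured vs. model-estimated
actual = [120, 90, 200, 150]
predicted = [130, 80, 210, 160]
print(round(mape(actual, predicted), 1))  # 7.8
```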

Relevance: 30.00%

Abstract:

Bankruptcy prediction has been a fruitful area of research. Univariate analysis and discriminant analysis were the first methodologies used. While they perform relatively well at correctly classifying bankrupt and nonbankrupt firms, their predictive ability has come into question over time. Univariate analysis lacks the big picture that financial distress entails. Multivariate discriminant analysis requires stringent assumptions that are violated when dealing with accounting ratios and market variables. This has led to the use of more complex models such as neural networks. While the accuracy of predictions has improved with the use of more technical models, an important point is still missing. Accounting ratios are the usual discriminating variables used in bankruptcy prediction. However, accounting ratios are backward-looking variables; at best, they are a current snapshot of the firm. Market variables are forward-looking variables, determined by discounting future outcomes. Microstructure variables, such as the bid-ask spread, also contain important information. Insiders are privy to more information than the retail investor, so if any financial distress is looming, the insiders should know before the general public. Therefore, any model of bankruptcy prediction should include market and microstructure variables. That is the focus of this dissertation. The traditional models and the newer, more technical models were tested and compared to the previous literature by employing accounting ratios, market variables, and microstructure variables. Our findings suggest that the more technical models are preferable, and that a mix of accounting and market variables is best at correctly classifying and predicting bankrupt firms. Based on the results, the multi-layer perceptron appears to be the most accurate model.
The set of best discriminating variables includes price, standard deviation of price, the bid-ask spread, net income to sales, working capital to total assets, and current liabilities to total assets.
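The bid-ask spread named among the best discriminators is typically used in relative form, normalized by the quote midpoint so it is comparable across price levels. The quotes below are hypothetical:

```python
def relative_spread(bid, ask):
    """Relative (fractional) bid-ask spread, a common microstructure variable."""
    midpoint = (bid + ask) / 2
    return (ask - bid) / midpoint

# Wider spreads often accompany deteriorating, information-sensitive firms
print(round(relative_spread(bid=9.80, ask=10.20), 3))  # 0.04
```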

Relevance: 30.00%

Abstract:

The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service-Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity.

We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are multifold. Cloud users can size their VMs appropriately and pay only for the resources they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience; on the other hand, administrators will be able to maximize their total revenue by utilizing application performance models and SLAs.

This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
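The core of a revenue-driven allocator can be sketched as a greedy loop: repeatedly give the next unit of a resource to the VM whose SLA earns the most from it. This is an illustrative simplification (the VM names and per-unit revenues are hypothetical, and it is optimal only when marginal revenues are non-increasing), not the thesis's exact algorithm:

```python
def allocate(capacity, marginal_revenue):
    """Greedily hand out resource units to the VM whose next unit earns the most.
    marginal_revenue[vm] lists per-unit revenues, assumed non-increasing
    (hypothetical SLA-derived numbers)."""
    alloc = {vm: 0 for vm in marginal_revenue}
    for _ in range(capacity):
        # Pick the VM with the best next-unit revenue that still wants more.
        best = max((vm for vm in alloc if alloc[vm] < len(marginal_revenue[vm])),
                   key=lambda vm: marginal_revenue[vm][alloc[vm]], default=None)
        if best is None:
            break
        alloc[best] += 1
    return alloc

# Two VMs competing for 3 units of CPU
print(allocate(3, {"vm1": [10, 4, 1], "vm2": [6, 5, 2]}))  # {'vm1': 1, 'vm2': 2}
```

In practice the marginal-revenue numbers would come from the ANN/SVM performance models combined with the SLA's payment schedule.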