17 results for First order traffic model
in Digital Commons at Florida International University
Abstract:
The field of chemical kinetics is an exciting and active field. The prevailing theories make a number of simplifying assumptions that do not always hold in actual cases. Another current problem concerns the development of efficient numerical algorithms for solving the master equations that arise in the description of complex reactions. The objective of the present work is to furnish a completely general and exact theory of reaction rates, in a form reminiscent of transition state theory, valid for all fluid phases, and also to develop a computer program that can solve complex reactions by finding the concentrations of all participating substances as a function of time. To do so, the full quantum scattering theory is used to derive the exact rate law, and the resulting cumulative reaction probability is then put into several equivalent forms that take into account all relativistic effects where applicable, including one that is strongly reminiscent of transition state theory but includes corrections from scattering theory. Two programs, one for solving complex reactions and the other for solving first-order linear kinetic master equations, have been developed and tested on simple applications.
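A minimal sketch of the second program's task, assuming a hypothetical A → B → C reaction chain with illustrative rate constants (none of the values below come from the dissertation): a first-order linear kinetic master equation dc/dt = Kc is solved exactly via the matrix exponential.

```python
# Sketch: first-order linear kinetic master equation dc/dt = K @ c
# for a hypothetical A -> B -> C chain; rate constants are illustrative only.
import numpy as np
from scipy.linalg import expm

k1, k2 = 1.0, 0.5                    # assumed rate constants (1/s)
K = np.array([[-k1, 0.0, 0.0],       # d[A]/dt = -k1*[A]
              [ k1, -k2, 0.0],       # d[B]/dt =  k1*[A] - k2*[B]
              [0.0,  k2, 0.0]])      # d[C]/dt =  k2*[B]
c0 = np.array([1.0, 0.0, 0.0])       # initial concentrations of A, B, C

for t in (0.0, 1.0, 2.0, 5.0):
    c = expm(K * t) @ c0             # exact solution of the linear system at time t
    print(f"t={t:4.1f}  A={c[0]:.3f}  B={c[1]:.3f}  C={c[2]:.3f}")
```

For stiff networks with many species, an implicit ODE integrator would typically replace the dense matrix exponential.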
Abstract:
Network simulation is an indispensable tool for studying Internet-scale networks due to their heterogeneous structure, immense size and changing properties. It is crucial for network simulators to generate representative traffic, which is necessary for effectively evaluating next-generation network protocols and applications. In network simulation, we can distinguish between foreground traffic, which is generated by the target applications the researchers intend to study and therefore must be simulated with high fidelity, and background traffic, which represents the network traffic generated by other applications and does not require the same accuracy. The background traffic nonetheless has a significant impact on the foreground traffic, since it competes with the foreground traffic for network resources and can therefore drastically affect the behavior of the applications that produce the foreground traffic. This dissertation aims to provide a solution for meaningfully generating background traffic in three aspects. First is realism. Realistic traffic characterization plays an important role in determining the correct outcome of simulation studies. This work starts by enhancing an existing fluid background traffic model, removing two of its unrealistic assumptions. The improved model can correctly reflect the network conditions in the reverse direction of the data traffic and can reproduce the traffic burstiness observed from measurements. Second is scalability. The trade-off between accuracy and scalability is a constant theme in background traffic modeling. This work presents a fast rate-based TCP (RTCP) traffic model, which uses analytical models to represent TCP congestion control behavior. This model outperforms other existing traffic models in that it can correctly capture the overall TCP behavior while achieving a speedup of more than two orders of magnitude over the corresponding packet-oriented simulation. Third is network-wide traffic generation. Regardless of how detailed or scalable the models are, they mainly focus on generating traffic on a single link, which cannot easily be extended to studies of more complicated network scenarios. This work presents a cluster-based spatio-temporal background traffic generation model that considers spatial and temporal traffic characteristics as well as their correlations. The resulting model can be used effectively for evaluation work in network studies.
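The RTCP model itself is not reproduced here; as a rough illustration of the kind of closed-form analytical estimate that rate-based TCP models build on, the sketch below evaluates a PFTK-style steady-state throughput formula from loss rate and round-trip time. The parameter values are assumptions, not results from the dissertation.

```python
# Sketch: PFTK-style analytical estimate of steady-state TCP throughput, the kind
# of closed form that rate-based background-traffic models build on.
import math

def tcp_throughput(mss_bytes, rtt_s, loss_p, t0_s=1.0, b=2, wmax_pkts=44):
    """Estimated steady-state throughput in bytes/s for loss probability loss_p."""
    if loss_p <= 0:
        return wmax_pkts * mss_bytes / rtt_s          # lossless: window-limited
    denom = (rtt_s * math.sqrt(2 * b * loss_p / 3)    # fast-retransmit term
             + t0_s * min(1.0, 3 * math.sqrt(3 * b * loss_p / 8))
             * loss_p * (1 + 32 * loss_p ** 2))       # retransmission-timeout term
    return min(wmax_pkts / rtt_s, 1.0 / denom) * mss_bytes

# Illustrative values: 1460-byte MSS, 50 ms RTT, 1% loss.
print(f"{tcp_throughput(1460, 0.05, 0.01) / 1e3:.1f} kB/s")
```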
Abstract:
A two-phase three-dimensional computational model of an intermediate temperature (120–190°C) proton exchange membrane (PEM) fuel cell is presented. This represents the first attempt to model PEM fuel cells employing intermediate temperature membranes, in this case phosphoric acid doped polybenzimidazole (PBI). To date, mathematical modeling of PEM fuel cells has been restricted to low temperature operation, especially to those employing Nafion® membranes, while research on PBI as an intermediate temperature membrane has been solely at the experimental level. This work is an advancement in the state of the art of both these fields of research. With a growing trend toward higher temperature operation of PEM fuel cells, mathematical modeling of such systems is necessary to help hasten the development of the technology and highlight areas where research should be focused. This mathematical model accounted for all the major transport and polarization processes occurring inside the fuel cell, including the two-phase phenomenon of gas dissolution in the polymer electrolyte. Results were presented for polarization performance, flux distributions, concentration variations in both the gaseous and aqueous phases, and temperature variations for various heat management strategies. The model predictions matched well with published experimental data and were self-consistent. The major finding of this research was that, due to the transport limitations imposed by the use of phosphoric acid as a doping agent, namely low solubility and diffusivity of dissolved gases and anion adsorption onto catalyst sites, the catalyst utilization is very low (~1–2%). Significant cost savings were predicted with the use of advanced catalyst deposition techniques that would greatly reduce the eventual thickness of the catalyst layer and subsequently improve catalyst utilization. The model also predicted that an increase in power output on the order of 50% can be expected if alternative doping agents to phosphoric acid can be found that afford better transport properties of dissolved gases, reduce anion adsorption onto catalyst sites, and maintain stability and conductive properties at elevated temperatures.
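For orientation only, the sketch below evaluates a textbook-style lumped polarization curve (open-circuit voltage minus activation, ohmic, and concentration losses). It is not the dissertation's two-phase 3-D model, and every parameter value is an assumption.

```python
# Sketch: generic lumped polarization curve V(i) = E_oc - activation - ohmic - concentration.
# All parameters are illustrative, not fitted PBI/phosphoric-acid values.
import numpy as np

E_oc  = 1.0      # open-circuit voltage, V (assumed)
b     = 0.08     # Tafel slope, V/decade (assumed)
i0    = 1e-4     # exchange current density, A/cm^2 (assumed)
R_ohm = 0.15     # area-specific resistance, ohm*cm^2 (assumed)
i_lim = 1.4      # limiting current density, A/cm^2 (assumed)

i = np.linspace(0.01, 1.3, 6)                 # current densities, A/cm^2
V = (E_oc - b * np.log10(i / i0)              # activation (Tafel) loss
          - R_ohm * i                         # ohmic loss
          + 0.05 * np.log(1 - i / i_lim))     # concentration loss (empirical form)

for ii, vv in zip(i, V):
    print(f"i = {ii:.2f} A/cm^2  ->  V = {vv:.3f} V")
```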
Abstract:
Ensuring the correctness of software has been the major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received a lot of attention in recent years, with several methods, techniques and tools developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. In this regard, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for the behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets, and the properties in first-order linear temporal logic. This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures. The software architectures studied are expressed in SAM. For the formal verification approach, the technique applied was model checking and the model checker of choice was Spin. As part of the approach, a SAM model is formally translated to a model in the input language of Spin and verified for its correctness with respect to temporal properties. In terms of testing, a testing approach for SAM architectures was defined which includes the evaluation of test cases based on Petri net testing theory, to be used in the testing process at the design level. Additionally, the information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (SAM tool) was implemented to help support the design and analysis of SAM models. The results show the applicability of the approach to testing and verification of SAM models with the aid of the SAM tool.
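As a schematic reminder of the behavioral formalism involved, the sketch below implements the enabling and firing rules of a toy place/transition net; it is neither a SAM model nor the SAM-to-Promela translation, and the net itself is hypothetical.

```python
# Sketch: enabling and firing rules of a toy place/transition Petri net.
# Each transition maps to (input places consumed, output places produced).
transitions = {
    "send":    ({"idle": 1},    {"waiting": 1}),
    "receive": ({"waiting": 1}, {"idle": 1, "delivered": 1}),
}

def enabled(marking, t):
    pre, _ = transitions[t]
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, t):
    pre, post = transitions[t]
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n                          # consume input tokens
    for p, n in post.items():
        m[p] = m.get(p, 0) + n             # produce output tokens
    return m

m = {"idle": 1}                            # initial marking
for t in ("send", "receive"):              # one execution path
    assert enabled(m, t)
    m = fire(m, t)
    print(t, "->", m)
```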
Abstract:
Dropout rates impacting students with high-incidence disabilities in American schools remain staggering (Bost, 2006; Hehir, 2005). Of this group, students with Emotional Behavioral Disorders (EBD) are at greatest risk. Despite the mandated national propagation of inclusion, students with EBD remain the least included and the least successful when included (Bost, 2006). Accordingly, this study investigated the potential significance of inclusive settings and other school-related variables within the context of promoting the graduation potential of students with Specific Learning Disabilities (SLD) or EBD. This mixed-methods study investigated specified school-related variables as likely dropout predictors, as well as the existence of first-order interactions among some of the variables. In addition, it portrayed the perspectives of students with SLD or EBD on the school-related variables that promote graduation. Accordingly, the sample was limited to students with SLD or EBD who had graduated or were close to graduation. For the quantitative component, the numerical data were analyzed using linear and logistic regressions. For the qualitative component, guided student interviews were conducted. Both strands were subsequently analyzed using Ridenour and Newman's (2008) model, in which the quantitative hypotheses are tested and later built upon by the related qualitative meta-themes. Results indicated that a successful academic history, or obtaining passing grades, was the only significant predictor of graduation potential while statistically controlling for all the other variables. At marginal significance, results also indicated that students with SLD or EBD in inclusive settings experienced better academic results and behavioral outcomes than those in self-contained settings. Specifically, students with SLD or EBD in inclusive settings were found to be more likely to obtain passing grades and less likely to be suspended from school. Generally, the meta-themes yielded during the student interviews corroborated these findings and provided extensive insights into how students with disabilities view school within the context of promoting graduation. Based on these results, provided the necessary academic accommodations and adaptations are in place, along with an effective behavioral program, inclusive settings can be utilized as dropout prevention tools in special education.
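The sketch below shows the shape of the logistic-regression analysis described above, run on tiny synthetic data invented purely for illustration (the variables and values are not the study's data).

```python
# Sketch: logistic regression of graduation on school-related predictors,
# using SYNTHETIC data invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: passing_grades (0/1), inclusive_setting (0/1); outcome: graduated (0/1).
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0], [1, 0], [0, 1]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_)                         # association with graduation
print("P(graduate | passing grades, inclusive setting):",
      round(model.predict_proba([[1, 1]])[0, 1], 2))
```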
Abstract:
The first chapter analyzes conditional assistance programs. They generate conflicting relationships between international financial institutions (IFIs) and member countries. The experience of IFIs with conditionality in the 1990s led them to allow countries more latitude in the design of their reform programs. A reformist government does not need conditionality, and conditionality is useless if a government does not want to reform. A government that faces opposition may use conditionality and the help of pro-reform lobbies as a lever to counteract anti-reform groups and succeed in implementing reforms. The second chapter analyzes economies saddled with taxes and regulations. I consider an economy in which many taxes, subsidies, and other distortionary restrictions are in place simultaneously. If I start from an inefficient laissez-faire equilibrium because of some domestic distortion, a small trade tax or subsidy can yield a first-order welfare improvement, even if the instrument itself creates distortions of its own. This may result in "welfare paradoxes". The purpose of the chapter is to quantify the welfare effects of changes in tax rates in a small open economy. I conduct the simulation in the context of an intertemporal utility maximization framework. I apply numerical methods to the model developed by Karayalcin. I introduce changes in the tax rates and quantify the impact on welfare, consumption and foreign assets, as well as the path to the new steady-state values. The third chapter studies the role of stock markets and adjustment costs in the international transmission of supply shocks. The analysis of the transmission of a positive supply shock that originates in one of the countries shows that, on impact, the shock leads to an immediate stock market boom in the country enjoying the technological advance, while the other country suffers depressed stock market prices as demand for its equity declines. A period of adjustment begins, culminating in a steady-state capital and output level identical to the one before the shock. The capital stock of one country undergoes a non-monotonic adjustment. The model is tested with plausible values of the variables, and the numerical results confirm the predictions of the theory.
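A toy illustration of the second-best logic behind such "welfare paradoxes" (a static two-good example with a lump-sum rebate, not the chapter's Karayalcin-based intertemporal simulation; all numbers are assumptions): with good x already taxed, a small tax on good y moves relative prices back toward parity and raises welfare even though the new tax is itself distortionary.

```python
# Sketch: second-best welfare gain from a small tax on the untaxed good
# when another good is already taxed and revenue is rebated lump-sum.
import numpy as np

def welfare(t_x, t_y, income=1.0):
    """Utility ln(x) + ln(y) for Cobb-Douglas demands facing taxed prices."""
    T = 0.0
    for _ in range(200):                          # fixed point for the rebate T
        x = (income + T) / (2 * (1 + t_x))        # expenditure share 1/2 on each good
        y = (income + T) / (2 * (1 + t_y))
        T = t_x * x + t_y * y                     # tax revenue returned to the consumer
    return np.log(x) + np.log(y)

base = welfare(0.3, 0.0)                          # pre-existing 30% tax on x only
for t_y in (0.0, 0.05, 0.10, 0.30):
    print(f"t_y = {t_y:.2f}  welfare gain vs. t_y = 0: {welfare(0.3, t_y) - base:+.5f}")
```

Welfare rises as t_y approaches t_x because uniform taxation with a full rebate replicates a lump-sum tax in this toy setting.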
Abstract:
Dissolved organic matter (DOM) is one of the largest carbon reservoirs on this planet and is present in aquatic environments as a highly complex mixture of organic compounds. The Florida coastal Everglades (FCE) is one of the largest wetlands in the world. DOM in this system is an important biogeochemical component, as most of the nitrogen (N) and phosphorus (P) are in organic forms. Achieving a better understanding of DOM dynamics in large coastal wetlands is critical, and a particularly important issue in the context of Everglades restoration. In this work, the environmental dynamics of surface water DOM on spatial and temporal scales were investigated. In addition, the photo- and bio-reactivity of this DOM was determined, surface-to-groundwater exchange of DOM was investigated, and the size distribution of freshwater DOM in the Everglades was assessed. The data show that DOM dynamics in this ecosystem are controlled by both hydrological and ecological drivers, differ clearly across spatial scales, and vary seasonally. The DOM reactivity data, modeled with a multi-pool first-order degradation kinetics model, showed that fluorescent DOM in FCE is generally photo-reactive and bio-refractory. Yet the sequential degradation experiments revealed a "priming effect" of sunlight on the bacterial uptake and reworking of this subtropical wetland DOM. Interestingly, specific PARAFAC components were found to have different photo- and bio-degradation rates, suggesting a highly heterogeneous nature of the fluorophores associated with the DOM. Surface-to-groundwater exchange of DOM was observed in different regions of the system, and compositional differences were associated with source and photo-reactivity. Lastly, the high degree of heterogeneity of DOM-associated fluorophores suggested by the degradation studies was confirmed through EEM-PARAFAC analysis of DOM along a molecular size continuum, suggesting that the fluorescence characteristics of DOM are strongly controlled by different size fractions and as such can exhibit significant differences in reactivity.
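A minimal sketch of the multi-pool first-order kinetics referred to above, here with two pools; the pool sizes and rate constants are illustrative assumptions rather than FCE measurements.

```python
# Sketch: two-pool first-order degradation model,
# C(t) = C_labile * exp(-k_labile * t) + C_stable * exp(-k_stable * t).
import numpy as np

def two_pool_decay(t_days, c_labile, k_labile, c_stable, k_stable):
    """Remaining DOM signal (e.g., normalized fluorescence) after t_days of incubation."""
    return c_labile * np.exp(-k_labile * t_days) + c_stable * np.exp(-k_stable * t_days)

t = np.array([0, 1, 3, 7, 14, 28])                        # incubation time, days
print(two_pool_decay(t, c_labile=0.3, k_labile=0.5,       # fast, photo-reactive pool
                        c_stable=0.7, k_stable=0.01))     # slow, refractory pool
```

In practice the pool sizes and rate constants would be fitted to the measured decay curves (for example, with nonlinear least squares), with separate fits per PARAFAC component.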
Abstract:
Chromium (Cr) is a metal of particular environmental concern, owing to its toxicity and widespread occurrence in groundwater, soil, and soil solution. A combination of hydrological, geochemical, and microbiological processes governs the subsurface migration of Cr. Little effort has been devoted to examining how these biogeochemical reactions combine with hydrologic processes to influence Cr migration. This study has focused on the complex problem of predicting Cr transport in laboratory column experiments. A 1-D reactive transport model was developed and evaluated against data obtained from laboratory column experiments. A series of dynamic laboratory column experiments were conducted under abiotic and biotic conditions. Cr(III) was injected into columns packed with β-MnO2-coated sand at different initial concentrations, variable flow rates, and at two different pore water pH values (3.0 and 4.0). In biotic anaerobic column experiments, Cr(VI) along with lactate was injected into columns packed with quartz sand or β-MnO2-coated sand and the bacterium Shewanella alga Simidu (BrY-MT). A mathematical model was developed which included advection-dispersion equations for the movement of Cr(III), Cr(VI), dissolved oxygen, lactate, and biomass. The model included first-order rate laws governing the adsorption of each Cr species and lactate. The equations for transport and adsorption were coupled with nonlinear equations for rate-limited oxidation-reduction reactions along with dual-Monod kinetic equations. Kinetic batch experiments were conducted to determine the reduction of Cr(VI) by BrY-MT in three different substrates. Results of the column experiments with Cr(III)-containing influent solutions demonstrate that β-MnO2 effectively catalyzes the oxidation of Cr(III) to Cr(VI). For a given influent concentration and pore water velocity, oxidation rates are higher, and hence effluent concentrations of Cr(VI) are greater, at pH 4 relative to pH 3. Reduction of Cr(VI) by BrY-MT was rapid (within one hour) in columns packed with quartz sand, whereas Cr(VI) reduction by BrY-MT was delayed (57 hours) in the presence of β-MnO2-coated sand. BrY-MT grown in BHIB (brain heart infusion broth) reduced the largest amount of Cr(VI) to Cr(III), followed by TSB (tryptic soy broth) and M9 (minimal medium). The comparisons of data and model results from the column experiments show that the depths associated with Cr(III) oxidation and transport within sediments of shallow aquatic systems can strongly influence trends in surface water quality. The results of this study suggest that carefully performed laboratory column experiments are a useful tool for determining the biotransformation of redox-sensitive metals, even in the presence of a strong oxidant like β-MnO2.
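A minimal sketch of the transport backbone of such a model: an explicit finite-difference solution of the 1-D advection-dispersion equation with a first-order reaction sink, dC/dt = -v dC/dx + D d²C/dx² - kC. The column length, velocity, dispersion coefficient, and rate constant are illustrative, not the fitted values from this study.

```python
# Sketch: 1-D advection-dispersion-reaction column, explicit finite differences.
# Parameters are illustrative only.
import numpy as np

L, nx = 0.3, 60                    # column length (m), number of cells
v, D, k = 1e-4, 5e-7, 1e-4         # velocity (m/s), dispersion (m^2/s), first-order rate (1/s)
dx = L / nx
dt = 0.4 * min(dx / v, dx**2 / (2 * D))       # stable explicit time step

C = np.zeros(nx)                   # initial concentration along the column
C_in = 1.0                         # constant influent concentration

for _ in range(20000):
    Cp = np.concatenate(([C_in], C))                      # ghost cell at the inlet
    adv = -v * (Cp[1:] - Cp[:-1]) / dx                    # upwind advection
    disp = D * (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx**2
    disp[0] = D * (C[1] - 2 * C[0] + C_in) / dx**2        # inlet boundary
    disp[-1] = D * (C[-2] - C[-1]) / dx**2                # zero-gradient outlet
    C = C + dt * (adv + disp - k * C)                     # first-order sink

print("effluent concentration:", round(C[-1], 3))
```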
Abstract:
Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built from user requirements using Petri nets and is formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified based on the partial order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort to develop a verified software repository. Our method to mine Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable, as it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools need to consider the trade-offs between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: 1) a post-prediction analysis method to increase coverage while ensuring precision; 2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
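A schematic of the lost-update pattern that atomicity-violation detectors such as McPatom look for (this is an illustration of the bug class, not the tool's analysis): two threads perform an unsynchronized read-modify-write on one shared variable.

```python
# Sketch: classic lost-update atomicity violation on a single shared variable.
import threading
import time

counter = 0

def worker(n):
    global counter
    for _ in range(n):
        tmp = counter          # read the shared variable
        time.sleep(0)          # widen the race window so the interleaving shows up
        counter = tmp + 1      # write back; a concurrent update made in between is lost

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("expected 2000, got", counter)   # usually less: updates were lost
```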
Abstract:
Petri nets are a formal, graphical and executable modeling technique for the specification and analysis of concurrent and distributed systems and have been widely applied in computer science and many other engineering disciplines. Low level Petri nets are simple and useful for modeling control flows but not powerful enough to define data and system functionality. High level Petri nets (HLPNs) have been developed to support data and functionality definitions, such as using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low level Petri nets, HLPNs result in compact system models that are easier to understand; they are therefore more useful in modeling complex systems. There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework that is supported by a tool. For modeling, this framework integrates two formal languages: a type of HLPN called Predicate Transition Net (PrT Net) is used to model a system's behavior, and a first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is the development of a software tool to support the formal modeling capabilities of this framework. For analysis, this framework combines three complementary techniques: simulation, explicit state model checking, and bounded model checking (BMC). Simulation is a straightforward and speedy method, but it only covers some execution paths in an HLPN model. Explicit state model checking covers all the execution paths but suffers from the state explosion problem. BMC is a trade-off, as it provides a certain level of coverage while being more efficient than explicit state model checking. The main contribution of this dissertation with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool to support the formal analysis capabilities of this framework. The SAMTools developed for this framework integrate three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
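In the spirit of bounded analysis, the sketch below checks whether a "bad" marking of a toy net is reachable within k steps by explicit bounded exploration; BMC proper would instead encode the bounded problem as a satisfiability query, and the net and property here are hypothetical.

```python
# Sketch: bounded reachability check on a toy net; state = (tokens in p1, tokens in p2).
def successors(state):
    p1, p2 = state
    succ = []
    if p1 >= 1:
        succ.append((p1 - 1, p2 + 1))     # t1: move a token from p1 to p2
    if p2 >= 2:
        succ.append((p1 + 1, p2 - 2))     # t2: consume two tokens from p2, return one to p1
    return succ

def reachable_within(initial, bad, k):
    frontier, seen = {initial}, {initial}
    for _ in range(k):
        frontier = {s for st in frontier for s in successors(st)} - seen
        if bad in frontier:
            return True
        seen |= frontier
    return False

print(reachable_within((2, 0), bad=(0, 2), k=3))   # True: (2,0) -> (1,1) -> (0,2)
```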
Abstract:
Today, smart-phones have revolutionized the wireless communication industry, ushering in an era of mobile data. To cater to the ever-increasing data traffic demand, it is of utmost importance to have more spectrum resources, and sharing under-utilized spectrum bands is an effective solution. In particular, the 4G broadband Long Term Evolution (LTE) technology and its foreseen 5G successor will benefit immensely if their operation can be extended to the under-utilized unlicensed spectrum. In this thesis, we first analyze WiFi 802.11n and LTE coexistence performance in the unlicensed spectrum, considering multi-layer cell layouts through system-level simulations. We consider a time division duplexing (TDD)-LTE system with an FTP traffic model for performance evaluation. Simulation results show that WiFi performance is more vulnerable to LTE interference, while LTE performance is degraded only slightly. Based on these initial findings, we propose a Q-Learning based dynamic duty cycle selection technique for configuring LTE transmission gaps, so that a satisfactory throughput is maintained for both the LTE and WiFi systems. Simulation results show that the proposed approach can enhance the overall capacity performance by 19% and WiFi capacity performance by 77%, hence enabling effective coexistence of LTE and WiFi systems in the unlicensed band.
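A minimal sketch of the Q-learning mechanism behind such dynamic duty-cycle selection (a stateless, bandit-style update with an invented reward; the thesis's actual states, actions, and reward are not specified here).

```python
# Sketch: Q-learning-style selection of an LTE duty cycle; the reward is a toy proxy
# for combined LTE + WiFi throughput and is an invented assumption.
import random

actions = [0.2, 0.4, 0.6, 0.8]          # candidate LTE duty cycles (fraction of airtime)
Q = {a: 0.0 for a in actions}           # single-state Q table
alpha, epsilon = 0.1, 0.1               # learning rate, exploration probability

def reward(duty):
    lte = duty                          # LTE throughput grows with its airtime share
    wifi = max(0.0, 1.0 - 1.3 * duty)   # WiFi collapses if LTE leaves too little airtime
    return lte + wifi

for _ in range(5000):
    a = random.choice(actions) if random.random() < epsilon else max(Q, key=Q.get)
    Q[a] += alpha * (reward(a) - Q[a])  # stateless Q-learning (bandit-style) update

print("learned duty cycle:", max(Q, key=Q.get))
```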