17 results for State-Space Modeling

in Digital Commons at Florida International University


Relevance:

100.00%

Publisher:

Abstract:

Inverters play a key role in connecting sustainable energy (SE) sources to local loads and the ac grid. Although the use of renewable sources has expanded rapidly in recent years, fundamental research on the design of inverters specialized for these systems is still needed. Recent advances in power electronics have led to new topologies and switching patterns for single-stage power conversion that are appropriate for SE sources and energy storage devices. The current source inverter (CSI) topology, along with a newly proposed switching pattern, is capable of converting a low dc voltage to the line ac voltage in only one stage. Simple implementation and high reliability, together with the potential advantages of higher efficiency and lower cost, turn the so-called single-stage boost inverter (SSBI) into a viable competitor to existing SE-based power conversion technologies. A dynamic model is one of the most essential requirements for performance analysis and control design of any engineering system. Thus, in order to achieve satisfactory operation, it is necessary to derive a dynamic model for the SSBI system. However, because of the switching behavior and nonlinear elements involved, analysis of the SSBI is a complicated task. This research applies the state-space averaging technique to the SSBI to develop state-space-averaged models under stand-alone and grid-connected modes of operation. A small-signal model is then derived by means of the perturbation and linearization method. An experimental hardware setup, including a laboratory-scale prototype SSBI, is built, and the validity of the obtained models is verified through simulation and experiments. Finally, an eigenvalue sensitivity analysis is performed to investigate the stability and dynamic behavior of the SSBI system over a typical range of operation.
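
The state-space averaging and small-signal steps described above can be illustrated with a brief numerical sketch. The matrices, component values, and duty cycle below describe a generic two-state boost-type converter and are purely illustrative; they are not the SSBI model derived in the dissertation.

```python
# Minimal sketch of state-space averaging and small-signal eigenvalue analysis.
# A_on/A_off and B describe a hypothetical two-state boost-type converter, not
# the actual SSBI model; component values and duty cycle are illustrative only.
import numpy as np

def averaged_model(A1, B1, A2, B2, d):
    """Weight each switching-state model by its duty ratio."""
    return d * A1 + (1 - d) * A2, d * B1 + (1 - d) * B2

# State vector x = [inductor current, capacitor voltage]
L, C, R = 1e-3, 470e-6, 10.0
A_on  = np.array([[0.0, 0.0],  [0.0, -1/(R*C)]])    # switch conducting
A_off = np.array([[0.0, -1/L], [1/C, -1/(R*C)]])    # switch blocking
B_on = B_off = np.array([[1/L], [0.0]])

A, B = averaged_model(A_on, B_on, A_off, B_off, d=0.6)

# DC operating point: 0 = A X + B U  ->  X = -A^{-1} B U
U = 24.0
X = -np.linalg.solve(A, B * U)

# Small-signal dynamics around X are governed by the eigenvalues of A;
# eigenvalues in the left half-plane indicate a stable operating point.
print("operating point:", X.ravel())
print("eigenvalues:", np.linalg.eigvals(A))
```

Repeating the eigenvalue computation while sweeping the duty cycle or load is the same kind of bookkeeping an eigenvalue sensitivity analysis performs over an operating range.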

Relevance:

80.00%

Publisher:

Abstract:

Research on the adoption of innovations by individuals has been criticized for focusing on the various factors that lead to the adoption or rejection of an innovation while ignoring important aspects of the dynamic process that takes place. Theoretical process-based models hypothesize that individuals go through consecutive stages of information gathering and decision making but do not clearly explain the mechanisms that cause an individual to leave one stage and enter the next. Research on the dynamics of the adoption process has lacked a structurally formal and quantitative description of the process. This dissertation addresses the adoption process of technological innovations from a Systems Theory perspective and assumes that individuals roam through different, not necessarily consecutive, states determined by the levels of quantifiable state variables. It is proposed that different levels of these state variables determine the state in which potential adopters are, and that events that alter the levels of these variables can cause individuals to migrate into different states. It was believed that Systems Theory could provide the infrastructure required to model the innovation adoption process, particularly as applied to information technologies, in a formal, structured fashion. This dissertation assumed that an individual progressing through an adoption process could be considered a system, where the occurrence of different events affects the system's overall behavior and ultimately the adoption outcome. The research effort aimed at identifying the various states of such a system and the significant events that could lead the system from one state to another. By mapping these attributes onto an “innovation adoption state space,” the adoption process could be fully modeled and used to assess the status, history, and possible outcomes of a specific adoption process. A group of Executive MBA students was observed as they adopted Internet-based technological innovations. The data collected were used to identify clusters in the values of the state variables and consequently define significant system states. Additionally, events were identified across the student sample that systematically moved the system from one state to another. The compilation of identified states and change-related events enabled the definition of an innovation adoption state-space model.
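
The clustering step mentioned above, grouping observed values of the state variables into discrete adoption states, can be sketched as follows. The variable names and observation values are hypothetical placeholders, not the study's survey data.

```python
# Minimal sketch of identifying adoption "states" by clustering state-variable
# observations; the feature names and values below are made up for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row: one observation of a potential adopter
# columns: [perceived usefulness, perceived ease of use, knowledge level]
observations = np.array([
    [0.20, 0.10, 0.30],
    [0.30, 0.20, 0.20],
    [0.70, 0.60, 0.50],
    [0.80, 0.70, 0.60],
    [0.90, 0.90, 0.90],
    [0.95, 0.85, 0.90],
])

X = StandardScaler().fit_transform(observations)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Each cluster label is read as one state in the adoption state space
for obs, state in zip(observations, kmeans.labels_):
    print(obs, "-> state", state)
```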

Relevance:

40.00%

Publisher:

Abstract:

Every space launch increases the overall amount of space debris. Satellites have limited awareness of nearby objects that might pose a collision hazard. Astrometric, radiometric, and thermal models for the study of space debris in low-Earth orbit have been developed. This modeling approach yields analysis methods that provide increased Local Area Awareness for satellites in low-Earth and geostationary orbit. Local Area Awareness is defined as the ability to detect, characterize, and extract useful information regarding resident space objects as they move through the space environment surrounding a spacecraft. The study of space debris is of critical importance to all space-faring nations. Characterization efforts are proposed using long-wave infrared sensors for space-based observations of debris objects in low-Earth orbit. Long-wave infrared sensors are commercially available, and because the received signal is temperature dependent, debris objects do not require solar illumination to be observed. Characterizing debris objects by means of passive imaging techniques allows further studies into the origination, specifications, and future trajectory of debris objects. Conclusions are made regarding the aforementioned thermal analysis as a function of debris orbit, geometry, orientation with respect to time, and material properties. Development of a thermal model permits the characterization of debris objects based upon their received long-wave infrared signals. Information regarding the material type, size, and tumble rate of the observed debris objects is extracted. This investigation proposes the utilization of long-wave infrared radiometric models of typical debris to develop techniques for the detection and characterization of debris objects via signal analysis of unresolved imagery. Knowledge regarding the orbital type and semi-major axis of the observed debris object is extracted via astrometric analysis. This knowledge may aid in constraining the admissible region for the initial orbit determination process. The resultant orbital information is then fused with the radiometric characterization analysis, enabling further characterization of the observed debris object. This fused analysis, yielding orbital, material, and thermal properties, significantly increases a satellite's Local Area Awareness via an intimate understanding of the debris environment surrounding the spacecraft.
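
The temperature dependence of the long-wave infrared signal noted above follows from Planck's law; a minimal sketch of the in-band radiance of a debris object treated as a grey body is given below. The emissivity, temperatures, and band limits are illustrative assumptions, not parameters of the dissertation's thermal model.

```python
# Minimal sketch: in-band LWIR radiance of a debris object treated as a grey body.
# Emissivity, temperatures, and band limits are illustrative assumptions only.
import numpy as np

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody, W / (m^2 * sr * m)."""
    return (2 * H * C**2 / wavelength_m**5 /
            (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0))

def inband_radiance(temp_k, emissivity=0.8, band=(8e-6, 14e-6), n=2000):
    """Grey-body radiance integrated over an LWIR band, W / (m^2 * sr)."""
    wl = np.linspace(band[0], band[1], n)
    spectral = planck_radiance(wl, temp_k)
    # trapezoidal integration over wavelength
    return emissivity * np.sum((spectral[:-1] + spectral[1:]) / 2 * np.diff(wl))

# The strong increase of in-band radiance with temperature is what makes the
# received LWIR signal a useful discriminator of debris temperature and material.
for T in (200.0, 250.0, 300.0):
    print(f"T = {T:.0f} K -> L_band = {inband_radiance(T):.2f} W m^-2 sr^-1")
```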

Relevance:

30.00%

Publisher:

Abstract:

Small errors can prove catastrophic. A very small cause which escapes our notice can determine a considerable effect that we cannot fail to see, and then we say that the effect is due to chance. Small differences in the initial conditions produce very great ones in the final phenomena; a small error in the former produces an enormous error in the latter. When dealing with any kind of electrical device specification, it is important to note that a pair of test conditions defines a test: the forcing function and the limit. Forcing functions define the external operating constraints placed upon the device under test, and the test itself measures how well the device responds to these constraints. Forcing inputs to threshold, for example, represents the most difficult testing because it puts those inputs as close as possible to the actual switching critical points and guarantees that the device will meet the input-output specifications. Prediction becomes impossible by classical analytical methods bounded by Newton and Euclid. We have found that nonlinear dynamic behavior is the natural state of all circuits and devices, and that opportunities exist for effective error detection in a nonlinear dynamics and chaos environment. Nowadays a set of linear limits is established around every aspect of a digital or analog circuit, and devices that fail the test against these limits are considered bad. Deterministic chaos in circuits is a fact, not a possibility, as confirmed by this Ph.D. research. In practice, under standard linear informational methodologies, this chaotic data product is usually undesirable, and we are educated to seek a more regular stream of output data. This Ph.D. research explored the possibility of taking the foundation of a very well known simulation and modeling methodology and introducing nonlinear dynamics and chaos precepts to produce a new error detection instrument able to put together streams of data scattered in space and time, thereby mastering deterministic chaos and changing the reputation of chaotic data as a potential risk for practical system status determination.
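
The sensitive dependence on initial conditions invoked above can be demonstrated in a few lines. The logistic map is used here only as a canonical chaotic system; it is a stand-in, not the circuit model or error-detection instrument developed in the research.

```python
# Minimal sketch: two trajectories of the chaotic logistic map that start a tiny
# distance apart diverge to completely different values within a few dozen steps.
# The logistic map is a stand-in chaotic system, not the dissertation's circuit model.

def logistic(x, r=4.0):
    """One step of the logistic map; r = 4.0 is in the chaotic regime."""
    return r * x * (1.0 - x)

x_a, x_b = 0.400000000, 0.400000001   # initial conditions differing by 1e-9
for step in range(50):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: x_a = {x_a:.6f}  x_b = {x_b:.6f}  "
              f"|diff| = {abs(x_a - x_b):.2e}")
```

The separation grows roughly exponentially, so by step 40 the two trajectories bear no resemblance to each other, which is the "enormous error in the latter" referred to above.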

Relevance:

30.00%

Publisher:

Abstract:

Unified Modeling Language (UML) is the most comprehensive and widely accepted object-oriented modeling language, owing to its multi-paradigm modeling capabilities, easy-to-use graphical notations, strong international organizational support, and production-quality industrial tool support. However, there is a lack of precise definition of the semantics of individual UML notations as well as of the relationships among multiple UML models, which often introduces incompleteness and inconsistency problems in UML software designs, especially for complex systems. Furthermore, there is a lack of methodologies to ensure a correct implementation from a given UML design. The purpose of this investigation is to verify and validate software designs in UML and to provide dependability assurance for the realization of a UML design. In my research, an approach is proposed to transform UML diagrams into a semantic domain, which is a formal component-based framework. The framework I propose consists of components and interactions through message passing, which are modeled by two-layer algebraic high-level nets and transformation rules, respectively. In the transformation approach, class diagrams, state machine diagrams, and activity diagrams are transformed into component models, and transformation rules are extracted from interaction diagrams. By applying transformation rules to component models, a (sub)system model of one or more scenarios can be constructed. Various techniques, such as model checking and Petri net analysis, can then be adopted to check whether UML designs are complete and consistent. A new component called the property parser was developed and merged into the tool SAM Parser, which realizes (sub)system models automatically. The property parser generates and weaves runtime monitoring code into system implementations automatically for dependability assurance. The framework developed in this investigation is both novel and flexible, since it can not only be used to verify and validate UML designs but also provides an approach to building models for various scenarios. As a result of my research, several kinds of previously ignored behavioral inconsistencies can be detected.
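
The runtime-monitoring idea behind the property parser, generating checks that are woven into an implementation, can be sketched in miniature as follows. The decorator, the monitored property, and the target function are hypothetical illustrations; the actual property parser generates its monitoring code automatically from the specified properties, which this toy decorator does not attempt to reproduce.

```python
# Minimal sketch of weaving a runtime monitor into an implementation; the
# monitored property and target function are hypothetical illustrations only.
import functools

def monitor(property_check, message):
    """Wrap a function so that a property is checked on every return value."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            if not property_check(result):
                # In a dependability-assurance setting this would log or recover
                raise RuntimeError(f"property violated in {func.__name__}: {message}")
            return result
        return wrapper
    return decorator

@monitor(lambda balance: balance >= 0, "balance must never become negative")
def withdraw(balance, amount):
    return balance - amount

print(withdraw(100, 30))        # OK: prints 70
try:
    withdraw(100, 150)          # violates the monitored property
except RuntimeError as err:
    print(err)
```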

Relevance:

30.00%

Publisher:

Abstract:

A two-phase, three-dimensional computational model of an intermediate-temperature (120–190°C) proton exchange membrane (PEM) fuel cell is presented. This represents the first attempt to model PEM fuel cells employing intermediate-temperature membranes, in this case phosphoric acid doped polybenzimidazole (PBI). To date, mathematical modeling of PEM fuel cells has been restricted to low-temperature operation, especially to cells employing Nafion® membranes, while research on PBI as an intermediate-temperature membrane has been solely at the experimental level. This work is an advancement in the state of the art of both fields of research. With a growing trend toward higher-temperature operation of PEM fuel cells, mathematical modeling of such systems is necessary to help hasten the development of the technology and highlight areas where research should be focused. The mathematical model accounted for all the major transport and polarization processes occurring inside the fuel cell, including the two-phase phenomenon of gas dissolution in the polymer electrolyte. Results were presented for polarization performance, flux distributions, concentration variations in both the gaseous and aqueous phases, and temperature variations for various heat management strategies. The model predictions matched well with published experimental data and were self-consistent. The major finding of this research was that, due to the transport limitations imposed by the use of phosphoric acid as a doping agent, namely low solubility and diffusivity of dissolved gases and anion adsorption onto catalyst sites, the catalyst utilization is very low (~1–2%). Significant cost savings were predicted with the use of advanced catalyst deposition techniques that would greatly reduce the thickness of the catalyst layer and subsequently improve catalyst utilization. The model also predicted that an increase in power output on the order of 50% can be expected if alternative doping agents to phosphoric acid can be found that afford better transport properties of dissolved gases, reduce anion adsorption onto catalyst sites, and maintain stability and conductive properties at elevated temperatures.
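
The polarization behavior summarized above can be illustrated with a textbook-style cell-voltage expression that combines activation, ohmic, and concentration losses. All parameter values are generic placeholders, not the fitted values of the PBI-cell model presented here.

```python
# Minimal sketch of a fuel-cell polarization curve: open-circuit voltage minus
# activation (Tafel), ohmic, and concentration losses. All parameters are
# generic placeholders, not the dissertation's fitted PBI-cell values.
import numpy as np

def cell_voltage(i, e_oc=0.95, i0=1e-4, b=0.06, r_ohm=0.15, i_lim=1.4):
    """Cell voltage [V] at current density i [A/cm^2]."""
    i = np.asarray(i, dtype=float)
    act  = b * np.log10(np.maximum(i, i0) / i0)              # activation (Tafel) loss
    ohm  = r_ohm * i                                          # ohmic loss
    conc = -0.05 * np.log(np.maximum(1 - i / i_lim, 1e-6))    # concentration loss
    return e_oc - act - ohm - conc

for i in (0.05, 0.2, 0.6, 1.0):
    v = cell_voltage(i)
    print(f"i = {i:.2f} A/cm^2 -> V = {v:.3f} V, P = {i * v:.3f} W/cm^2")
```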

Relevance:

30.00%

Publisher:

Abstract:

The maturation of the public sphere in Argentina during the late nineteenth and early twentieth centuries was a critical element in the nation-building process and the overall development of the modern state. Within the context of this evolution, the discourse of disease generated intense debates that subsequently influenced policies that transformed the public spaces of Buenos Aires and facilitated state intervention within the private domains of the city’s inhabitants. Under the banner of hygiene and public health, municipal officials thus Europeanized the nation’s capital through the construction of parks and plazas and likewise utilized the press to garner support for the initiatives that would remedy the unsanitary conditions and practices of the city. Despite promises to the contrary, the improvements to the public spaces of Buenos Aires primarily benefited the porteño elite while the efforts to root out disease often targeted working-class neighborhoods. The model that reformed the public space of Buenos Aires, including its socially differentiated application of aesthetic order and public health policies, was ultimately employed throughout the Argentine Republic as the consolidated political elite rolled out its national program of material and social development.

Relevance:

30.00%

Publisher:

Abstract:

Annual Average Daily Traffic (AADT) is a critical input to many transportation analyses. By definition, AADT is the average 24-hour volume at a highway location over a full year. Traditionally, AADT is estimated using a mix of permanent and temporary traffic counts. Because field collection of traffic counts is expensive, it is usually done only for the major roads, leaving most local roads without any AADT information. However, AADTs are needed for local roads for many applications. For example, AADTs are used by state Departments of Transportation (DOTs) to calculate the crash rates of all local roads in order to identify the top five percent of hazardous locations for annual reporting to the U.S. DOT. This dissertation develops a new method for estimating AADTs for local roads using travel demand modeling. A major component of the new method is a parcel-level trip generation model that estimates the trips generated by each parcel. The model uses tax parcel data together with the trip generation rates and equations provided by the ITE Trip Generation Report. The generated trips are then distributed to existing traffic count sites using a parcel-level gravity model for trip distribution. The all-or-nothing assignment method is then used to assign the trips onto the roadway network to estimate the final AADTs. The entire process was implemented in the Cube demand modeling system with extensive spatial data processing in ArcGIS. To evaluate the performance of the new method, data from several study areas in Broward County, Florida, were used. The estimated AADTs were compared with those from two existing methods, using actual traffic counts as the ground truth. The results show that the new method performs better than both existing methods. One limitation of the new method is that it relies on Cube, which limits the number of zones to 32,000. Accordingly, a study area exceeding this limit must be partitioned into smaller areas. Because AADT estimates for roads near the boundary areas were found to be less accurate, further research could examine the best way to partition a study area to minimize this impact.
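
The trip distribution step described above uses a gravity model; a minimal sketch of a singly constrained version is shown below. The productions, attractions, travel costs, and impedance parameter are made-up illustrative numbers, not the Broward County data or the calibrated parameters of the dissertation.

```python
# Minimal sketch of a singly constrained gravity model distributing the trips
# generated by each parcel among destinations; productions, attractions, and
# travel costs are illustrative numbers, not the study's data.
import numpy as np

def gravity_distribution(productions, attractions, cost, beta=0.1):
    """T[i, j] = P[i] * A[j] * f(c_ij) / sum_k A[k] * f(c_ik), with f = exp(-beta * c)."""
    friction = np.exp(-beta * cost)                 # impedance function
    weights = attractions[None, :] * friction       # A_j * f(c_ij)
    return productions[:, None] * weights / weights.sum(axis=1, keepdims=True)

productions = np.array([120.0, 80.0, 200.0])        # trips generated per parcel
attractions = np.array([1.0, 3.0, 2.0, 4.0])        # relative attraction of count sites
cost = np.array([[5.0, 12.0, 8.0, 20.0],
                 [10.0, 4.0, 15.0, 9.0],
                 [7.0, 11.0, 3.0, 14.0]])            # travel time parcel -> site, minutes

T = gravity_distribution(productions, attractions, cost)
print(T.round(1))
print("row sums (should equal productions):", T.sum(axis=1).round(1))
```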

Relevance:

30.00%

Publisher:

Abstract:

Significant improvements have been made in estimating gross primary production (GPP), ecosystem respiration (R), and net ecosystem production (NEP) from diel, “free-water” changes in dissolved oxygen (DO). Here we evaluate some of the assumptions and uncertainties that are still embedded in the technique and provide guidelines on how to estimate reliable metabolic rates from high-frequency sonde data. True whole-system estimates are often not obtained because measurements reflect an unknown zone of influence which varies over space and time. A minimum logging frequency of 30 min was sufficient to capture metabolism at the daily time scale. Higher sampling frequencies capture additional pattern in the DO data, primarily related to physical mixing. Causes behind the often large daily variability are discussed and evaluated for an oligotrophic and a eutrophic lake. Despite a 3-fold higher day-to-day variability in absolute GPP rates in the eutrophic lake, both lakes required at least 3 sonde days per week for GPP estimates to be within 20% of the weekly average. A sensitivity analysis evaluated uncertainties associated with DO measurements, piston velocity (k), and the assumption that daytime R equals nighttime R. In low productivity lakes, uncertainty in DO measurements and piston velocity strongly impacts R but has no effect on GPP or NEP. Lack of accounting for higher R during the day underestimates R and GPP but has no effect on NEP. We finally provide suggestions for future research to improve the technique.
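
A minimal sketch of the free-water bookkeeping described above is shown below: respiration is estimated from the nighttime oxygen balance, gas exchange is removed using a piston velocity, and GPP follows from the daytime balance. The hourly DO series, piston velocity, mixed-layer depth, and saturation value are illustrative numbers only, and daytime R is assumed equal to nighttime R, the very assumption the sensitivity analysis examines.

```python
# Minimal sketch of free-water metabolism from diel DO: R from the nighttime
# oxygen balance, GPP from the daytime balance, NEP = GPP - R. All inputs are
# illustrative, and daytime R is assumed equal to nighttime R.
import numpy as np

hours = np.arange(24)
daylight = (hours >= 6) & (hours < 18)

# Hypothetical hourly DO (mg L^-1): rises in daylight, declines slowly otherwise
do = (8.0
      + 0.8 * np.where(daylight, np.sin(np.pi * (hours - 6) / 12), 0.0)
      - 0.02 * hours)
do_sat = 9.0          # saturation DO, mg L^-1 (assumed constant here)
k = 0.02              # piston velocity, m h^-1 (~0.5 m d^-1), illustrative
z_mix = 2.0           # mixed-layer depth, m

ddo_dt = np.gradient(do)                        # mg L^-1 h^-1
flux = k * (do_sat - do) / z_mix                # atmospheric exchange, mg L^-1 h^-1
nep_hourly = ddo_dt - flux                      # net ecosystem production each hour

r_hourly = -nep_hourly[~daylight].mean()        # respiration rate from nighttime hours
R = 24 * r_hourly                               # daily respiration, mg O2 L^-1 d^-1
GPP = (nep_hourly[daylight] + r_hourly).sum()   # daytime production, mg O2 L^-1 d^-1
NEP = GPP - R

print(f"GPP = {GPP:.2f}, R = {R:.2f}, NEP = {NEP:.2f}  (mg O2 L^-1 d^-1)")
```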

Relevance:

30.00%

Publisher:

Abstract:

World War II profoundly impacted Florida. The military geography of the state is essential to an understanding of the war. The geostrategic concerns of place and space determined that Florida would become a statewide military base. Florida's attributes of place, such as climate and topography, determined its use as a military academy hosting over two million soldiers, nearly 15 percent of the GI Army, the largest force the US ever raised. One in eight Floridians went into uniform. Equally, Florida's space on the planet made it central to both defensive and offensive strategies. The Second World War was a war of movement, and Florida was a major jumping-off point for US force projection world-wide, especially of air power. Florida's demography facilitated its use as a base camp for the assembly and engagement of this military power. In 1940, less than two percent of the US population lived in Florida, a quiet, barely populated backwater of the United States. But owing to its critical place and space, over the next few years it became a 65,000-square-mile training ground, supply dump, and embarkation site vital to the US war effort. Because of its place astride some of the most important sea lanes in the Atlantic World, Florida was the scene of one of the few Western Hemisphere battles of the war. The militarization of Florida began long before Pearl Harbor. The pre-war buildup conformed to the US strategy of the war. The strategy of the US was then (and remains today) one of forward defense: harden the frontier, then take the battle to the enemy, rather than fight them in North America. The policy of "Europe First" focused the main US war effort on the defeat of Hitler's Germany, evaluated to be the most dangerous enemy. Established in Florida were the military forces requiring the longest time to develop and most needed to defeat the Axis: a naval aviation force for sea-borne hostilities, a heavy bombing force for reducing enemy industrial states, and an aerial logistics train for overseas supply of expeditionary campaigns. The unique Florida coastline made possible the seaborne invasion training demanded for US victory. The civilian population was employed assembling mass-produced first-generation container ships, while Florida hosted casualties, prisoners of war, and transient personnel moving between the Atlantic and Pacific. By the end of hostilities and the lifting of the Unlimited Emergency, officially on December 31, 1946, Florida had become a transportation nexus. Florida accommodated a return of demobilized soldiers and a migration of displaced persons, and evolved into a modern veterans' colonia. It was instrumental in fashioning the modern US military, while remaining a center of the active National Defense establishment. Those are the themes of this work.

Relevance:

30.00%

Publisher:

Abstract:

Renewable or sustainable energy (SE) sources have attracted the attention of many countries because the power generated is environmentally friendly and the sources are not subject to the instability of price and availability. This dissertation presents new trends in the DC-AC converters (inverters) used with renewable energy sources, particularly for photovoltaic (PV) energy systems. A review of the existing technologies is performed for both single-phase and three-phase systems, and the pros and cons of the best candidates are investigated. In many modern energy conversion systems, a DC voltage, which is provided by an SE source or energy storage device, must be boosted and converted to an AC voltage with fixed amplitude and frequency. A novel switching pattern based on the concept of the conventional space-vector pulse-width-modulated (SVPWM) technique is developed for single-stage boost inverters using the current source inverter (CSI) topology. The six active switching states and two zero states of conventional SVPWM techniques, with three switches conducting at any given instant, are modified herein into three charging states and six discharging states with only two switches conducting at any given instant. The charging states are necessary in order to boost the DC input voltage. It is demonstrated that the CSI topology, in conjunction with the developed switching pattern, is capable of providing the required residential AC voltage from the low DC voltage of one PV panel at its rated power, for both linear and nonlinear loads. In a micro-grid, control of the active and reactive power, and consequently voltage regulation, is one of the main requirements. Therefore, the capability of the single-stage boost inverter to control the active power and provide reactive power is investigated. It is demonstrated that the injected active and reactive power can be independently controlled through two modulation indices introduced in the proposed switching algorithm. The system is capable of injecting a desirable level of reactive power, while the maximum power point tracking (MPPT) dictates the desirable active power. The developed switching pattern is experimentally verified through a laboratory-scale three-phase 200 W boost inverter for both grid-connected and stand-alone cases, and the results are presented.
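
The conventional SVPWM concept that the modified pattern builds on can be sketched with the standard dwell-time calculation: the reference voltage vector is synthesized from the two adjacent active vectors plus a zero vector. The values below are illustrative, and this is the familiar voltage-source formulation shown only for orientation; the dissertation's CSI charging/discharging pattern itself is not reproduced.

```python
# Minimal sketch of the conventional SVPWM dwell-time calculation. Values are
# illustrative; this is the standard voltage-source formulation, not the
# modified CSI charging/discharging pattern proposed in the dissertation.
import numpy as np

def svpwm_dwell_times(v_ref, theta, v_dc, t_s):
    """Dwell times (t1, t2, t0) within one switching period t_s."""
    sector = int(theta // (np.pi / 3)) % 6         # which 60-degree sector
    alpha = theta - sector * np.pi / 3             # angle inside the sector
    m = np.sqrt(3) * v_ref / v_dc                  # modulation index
    t1 = t_s * m * np.sin(np.pi / 3 - alpha)       # adjacent active vector V_n
    t2 = t_s * m * np.sin(alpha)                   # adjacent active vector V_{n+1}
    t0 = t_s - t1 - t2                             # zero vector fills the rest
    return sector, t1, t2, t0

# Sweep the reference vector over one electrical cycle
for theta_deg in (10, 50, 100, 170, 250, 330):
    sector, t1, t2, t0 = svpwm_dwell_times(
        v_ref=150.0, theta=np.radians(theta_deg), v_dc=400.0, t_s=100e-6)
    print(f"theta = {theta_deg:3d} deg  sector {sector}  "
          f"t1 = {t1*1e6:5.1f} us  t2 = {t2*1e6:5.1f} us  t0 = {t0*1e6:5.1f} us")
```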

Relevance:

30.00%

Publisher:

Abstract:

The standard highway assignment model in the Florida Standard Urban Transportation Modeling Structure (FSUTMS) is based on the equilibrium traffic assignment method. This method involves running several iterations of all-or-nothing capacity-restraint assignment with an adjustment of travel time to reflect delays encountered in the associated iteration. The iterative link time adjustment process is accomplished through the Bureau of Public Roads (BPR) volume-delay equation. Since FSUTMS' traffic assignment procedure outputs daily volumes and the input capacities are given in hourly volumes, it is necessary to convert the hourly capacities to their daily equivalents when computing the volume-to-capacity ratios used in the BPR function. The conversion is accomplished by dividing the hourly capacity by a factor called the peak-to-daily ratio, referred to as CONFAC in FSUTMS. The ratio is computed as the highest hourly volume of a day divided by the corresponding total daily volume. While several studies have indicated that CONFAC is a decreasing function of the level of congestion, a constant value is used for each facility type in the current version of FSUTMS. This ignores the different congestion level associated with each roadway and is believed to be one of the culprits behind traffic assignment errors. Traffic count data from across the state of Florida were used to calibrate CONFACs as a function of a congestion measure using the weighted least squares method. The calibrated functions were then implemented in FSUTMS through a procedure that takes advantage of the iterative nature of FSUTMS' equilibrium assignment method. The assignment results based on constant and variable CONFACs were then compared against ground counts for three selected networks. It was found that the accuracy of the two assignments was not significantly different, and that the hypothesized improvement in assignment results from the variable CONFAC model was not empirically evident. It was recognized that many other factors beyond the scope and control of this study could contribute to this finding. It was recommended that further studies focus on the use of the variable CONFAC model with recalibrated parameters for the BPR function and/or with other forms of volume-delay functions.
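
The link-time adjustment described above can be sketched with the standard BPR volume-delay form together with the CONFAC conversion of hourly capacity to a daily equivalent. The alpha/beta parameters and the simple linear congestion-dependent CONFAC below are placeholders, not the functions calibrated in the study.

```python
# Minimal sketch of the BPR volume-delay calculation with a peak-to-daily capacity
# conversion (CONFAC). The alpha/beta parameters and the illustrative
# congestion-dependent CONFAC are placeholders, not the study's calibrated values.

def bpr_travel_time(free_flow_time, daily_volume, hourly_capacity,
                    confac, alpha=0.15, beta=4.0):
    """BPR: t = t0 * (1 + alpha * (v / c)^beta), with c converted to a daily capacity."""
    daily_capacity = hourly_capacity / confac      # hourly capacity -> daily equivalent
    vc_ratio = daily_volume / daily_capacity
    return free_flow_time * (1.0 + alpha * vc_ratio ** beta)

def variable_confac(vc_ratio, confac_low=0.10, confac_high=0.08):
    """Illustrative decreasing CONFAC: congested links spread traffic over more hours."""
    vc_ratio = min(max(vc_ratio, 0.0), 1.5)
    return confac_low + (confac_high - confac_low) * vc_ratio / 1.5

t0, volume, cap_hourly = 10.0, 18000.0, 1800.0     # minutes, veh/day, veh/h

# Constant CONFAC versus a congestion-dependent CONFAC
t_const = bpr_travel_time(t0, volume, cap_hourly, confac=0.10)
vc = volume / (cap_hourly / 0.10)
t_var = bpr_travel_time(t0, volume, cap_hourly, confac=variable_confac(vc))
print(f"constant CONFAC: {t_const:.1f} min, variable CONFAC: {t_var:.1f} min")
```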

Relevance:

30.00%

Publisher:

Abstract:

This dissertation presents a study of the D(e,e′p)n reaction carried out at the Thomas Jefferson National Accelerator Facility (Jefferson Lab) for a set of fixed values of four-momentum transfer Q² = 2.1 and 0.8 (GeV/c)² and for missing momenta pm ranging from pm = 0.03 to pm = 0.65 GeV/c. The analysis resulted in the determination of absolute D(e,e′p)n cross sections as a function of the recoiling neutron momentum and its scattering angle with respect to the momentum transfer vector q. The angular distribution was compared to various modern theoretical predictions that also included final state interactions (FSI). The data confirmed the theoretical prediction of a strong anisotropy of final state interaction contributions at Q² of 2.1 (GeV/c)², while at the lower Q² value the anisotropy was much less pronounced. At Q² of 0.8 (GeV/c)², theories show a large disagreement with the experimental results. The experimental momentum distribution of the bound proton inside the deuteron has been determined for the first time at a set of fixed neutron recoil angles. The momentum distribution is directly related to the ground state wave function of the deuteron in momentum space. The high momentum part of this wave function plays a crucial role in understanding the short-range part of the nucleon-nucleon force. At Q² = 2.1 (GeV/c)², the momentum distribution determined at small neutron recoil angles is much less affected by FSI compared to a recoil angle of 75°. In contrast, at Q² = 0.8 (GeV/c)² there seems to be no region with reduced FSI for larger missing momenta. Besides the statistical errors, systematic errors of about 5–6% were included in the final results in order to account for normalization uncertainties and uncertainties in the determination of kinematic variables. The measurements were carried out using electron beam energies of 2.8 and 4.7 GeV with beam currents between 10 and 100 μA. The scattered electrons and the ejected protons originated from a 15 cm long liquid deuterium target and were detected in coincidence with the two high resolution spectrometers of Hall A at Jefferson Lab.
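
The kinematic quantities quoted above, the four-momentum transfer and the missing (recoil neutron) momentum, follow from standard electron-scattering relations; a minimal sketch with illustrative numbers is shown below. The beam energy, scattered-electron kinematics, and proton momentum used here are not the actual experimental settings.

```python
# Minimal sketch of the electron-scattering kinematics behind Q^2 and the missing
# (recoil neutron) momentum; the beam energy, scattered-electron kinematics, and
# proton momentum below are illustrative numbers, not the experiment's settings.
import numpy as np

def q2_and_qvec(e_beam, e_prime, theta_e):
    """Q^2 [(GeV/c)^2] and 3-momentum transfer vector q [GeV/c], electron mass neglected."""
    q2 = 4.0 * e_beam * e_prime * np.sin(theta_e / 2.0) ** 2
    # beam along z, scattering in the x-z plane
    k_in = np.array([0.0, 0.0, e_beam])
    k_out = e_prime * np.array([np.sin(theta_e), 0.0, np.cos(theta_e)])
    return q2, k_in - k_out

def missing_momentum(q_vec, p_proton_vec):
    """p_m = |q - p_p|: momentum of the undetected recoiling neutron [GeV/c]."""
    return np.linalg.norm(q_vec - p_proton_vec)

# Illustrative kinematics (energies in GeV, momenta in GeV/c, angles in radians)
q2, q_vec = q2_and_qvec(e_beam=4.7, e_prime=3.5, theta_e=np.radians(20.0))
p_p = 1.45 * q_vec / np.linalg.norm(q_vec)     # proton detected nearly along q
p_m = missing_momentum(q_vec, p_p)
print(f"Q^2 = {q2:.2f} (GeV/c)^2, |q| = {np.linalg.norm(q_vec):.2f} GeV/c, "
      f"p_m = {p_m:.2f} GeV/c")
```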

Relevance:

30.00%

Publisher:

Abstract:

Petri nets are a formal, graphical, and executable modeling technique for the specification and analysis of concurrent and distributed systems and have been widely applied in computer science and many other engineering disciplines. Low-level Petri nets are simple and useful for modeling control flows but not powerful enough to define data and system functionality. High-level Petri nets (HLPNs) have been developed to support data and functionality definitions, such as using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low-level Petri nets, HLPNs result in compact system models that are easier to understand; therefore, HLPNs are more useful in modeling complex systems. There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework that is supported by a tool. For modeling, the framework integrates two formal languages: a type of HLPN called the Predicate Transition net (PrT net) is used to model a system's behavior, and a first-order linear time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is the development of a software tool to support the formal modeling capabilities in this framework. For analysis, the framework combines three complementary techniques: simulation, explicit state model checking, and bounded model checking (BMC). Simulation is a straightforward and speedy method, but it covers only some execution paths in an HLPN model. Explicit state model checking covers all the execution paths but suffers from the state explosion problem. BMC is a tradeoff: it provides a certain level of coverage while being more efficient than explicit state model checking. The main contribution of this dissertation with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques in a software tool to support the formal analysis capabilities in this framework. The SAMTools developed for this framework integrate three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
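
A low-level Petri net of the kind contrasted with HLPNs above can be simulated in a few lines; the toy net below (a buffer feeding a single machine) is only an illustration, not one of the PrT-net models handled by PIPE+.

```python
# Minimal sketch of a low-level Petri net: places hold token counts, a transition
# fires when every input place holds enough tokens. The net below is a toy example,
# not one of the PrT-net / HLPN models handled by the SAMTools.

marking = {"buffer": 2, "machine_idle": 1, "machine_busy": 0, "done": 0}

transitions = {
    # name: (tokens consumed from input places, tokens produced in output places)
    "start": ({"buffer": 1, "machine_idle": 1}, {"machine_busy": 1}),
    "finish": ({"machine_busy": 1}, {"machine_idle": 1, "done": 1}),
}

def enabled(name):
    inputs, _ = transitions[name]
    return all(marking[p] >= n for p, n in inputs.items())

def fire(name):
    inputs, outputs = transitions[name]
    for p, n in inputs.items():
        marking[p] -= n
    for p, n in outputs.items():
        marking[p] += n

# Fire any enabled transition until the net deadlocks (all buffered jobs processed)
while True:
    ready = [t for t in transitions if enabled(t)]
    if not ready:
        break
    fire(ready[0])
    print(f"fired {ready[0]:6s} -> {marking}")
```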