944 results for State-Space Modeling
A framework for transforming, analyzing, and realizing software designs in Unified Modeling Language
Abstract:
Unified Modeling Language (UML) is the most comprehensive and widely accepted object-oriented modeling language due to its multi-paradigm modeling capabilities and easy-to-use graphical notations, with strong international organizational support and production-quality industrial tool support. However, there is a lack of precise definition of the semantics of individual UML notations, as well as of the relationships among multiple UML models, which often introduces incompleteness and inconsistency problems in UML software designs, especially for complex systems. Furthermore, there is a lack of methodologies to ensure a correct implementation from a given UML design. The purpose of this investigation is to verify and validate software designs in UML and to provide dependability assurance for the realization of a UML design. In my research, an approach is proposed to transform UML diagrams into a semantic domain, which is a formal component-based framework. The proposed framework consists of components and interactions through message passing, modeled by two-layer algebraic high-level nets and transformation rules, respectively. In the transformation approach, class diagrams, state machine diagrams, and activity diagrams are transformed into component models, and transformation rules are extracted from interaction diagrams. By applying transformation rules to component models, a (sub)system model of one or more scenarios can be constructed. Various techniques, such as model checking and Petri net analysis, can be adopted to check whether UML designs are complete and consistent. A new component called the property parser was developed and merged into the tool SAM Parser, which realizes (sub)system models automatically. The property parser automatically generates and weaves runtime monitoring code into system implementations for dependability assurance. The framework in this investigation is novel and flexible since it can not only be used to verify and validate UML designs but also provides an approach to build models for various scenarios. As a result of my research, several kinds of previously ignored behavioral inconsistencies can be detected.
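To make the composition step concrete, here is a minimal Python sketch of the idea of applying a transformation rule (extracted from an interaction diagram) to wire two component models together through a message place. All class and function names, and the login example, are illustrative assumptions, not the dissertation's actual tool chain.

```python
# Illustrative sketch only -- not the dissertation's actual tool chain.
# It mimics composing component models (from class/state machine
# diagrams) by applying a transformation rule extracted from an
# interaction diagram: the rule wires a sender's output message to a
# receiver's input, yielding a (sub)system model for one scenario.

from dataclasses import dataclass, field

@dataclass
class ComponentModel:
    name: str
    places: set = field(default_factory=set)       # net places (states)
    transitions: set = field(default_factory=set)  # net transitions
    arcs: set = field(default_factory=set)         # (src, dst) pairs

def apply_message_rule(sender: ComponentModel, receiver: ComponentModel,
                       message: str) -> ComponentModel:
    """Hypothetical transformation rule: fuse two components through a
    shared message place, as the interaction diagram prescribes."""
    system = ComponentModel(name=f"{sender.name}+{receiver.name}")
    msg_place = f"msg_{message}"
    system.places = sender.places | receiver.places | {msg_place}
    system.transitions = sender.transitions | receiver.transitions
    # The sender's send-transition deposits a token; the receiver's
    # receive-transition consumes it.
    system.arcs = (sender.arcs | receiver.arcs
                   | {(f"{sender.name}.send_{message}", msg_place),
                      (msg_place, f"{receiver.name}.recv_{message}")})
    return system

client = ComponentModel("Client", {"idle"}, {"Client.send_login"})
server = ComponentModel("Server", {"wait"}, {"Server.recv_login"})
print(apply_message_rule(client, server, "login").arcs)
```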
Abstract:
A two-phase, three-dimensional computational model of an intermediate-temperature (120–190°C) proton exchange membrane (PEM) fuel cell is presented. This represents the first attempt to model PEM fuel cells employing intermediate-temperature membranes, in this case phosphoric acid-doped polybenzimidazole (PBI). To date, mathematical modeling of PEM fuel cells has been restricted to low-temperature operation, especially to cells employing Nafion® membranes, while research on PBI as an intermediate-temperature membrane has been solely at the experimental level. This work is an advancement in the state of the art of both these fields of research. With a growing trend toward higher-temperature operation of PEM fuel cells, mathematical modeling of such systems is necessary to help hasten the development of the technology and highlight areas where research should be focused. The mathematical model accounted for all the major transport and polarization processes occurring inside the fuel cell, including the two-phase phenomenon of gas dissolution in the polymer electrolyte. Results were presented for polarization performance, flux distributions, concentration variations in both the gaseous and aqueous phases, and temperature variations for various heat management strategies. The model predictions matched published experimental data well and were self-consistent. The major finding of this research was that, due to the transport limitations imposed by the use of phosphoric acid as a doping agent, namely the low solubility and diffusivity of dissolved gases and anion adsorption onto catalyst sites, catalyst utilization is very low (∼1–2%). Significant cost savings were predicted with the use of advanced catalyst deposition techniques that would greatly reduce the eventual thickness of the catalyst layer and subsequently improve catalyst utilization. The model also predicted that an increase in power output on the order of 50% can be expected if alternative doping agents to phosphoric acid can be found that afford better transport properties of dissolved gases, reduce anion adsorption onto catalyst sites, and maintain stability and conductive properties at elevated temperatures.
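For orientation, the polarization performance mentioned above decomposes in the standard way into activation, ohmic, and mass-transport losses. The following sketch is that textbook decomposition only, not the dissertation's 3-D two-phase model, and every parameter value is an illustrative placeholder.

```python
# Minimal polarization sketch, NOT the dissertation's 3-D two-phase
# model: cell voltage as open-circuit voltage minus activation, ohmic,
# and concentration losses. All parameter values are illustrative.
import math

def cell_voltage(i, E_ocv=0.95, i0=1e-4, b=0.1, R_ohm=0.25, i_lim=1.2):
    """i: current density [A/cm^2]; returns cell voltage [V]."""
    eta_act = b * math.log(i / i0)                 # Tafel activation loss
    eta_ohm = R_ohm * i                            # membrane/contact loss
    eta_conc = -0.05 * math.log(1.0 - i / i_lim)   # mass-transport loss
    return E_ocv - eta_act - eta_ohm - eta_conc

for i in (0.1, 0.4, 0.8):
    print(f"{i:.1f} A/cm^2 -> {cell_voltage(i):.3f} V")
```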
Abstract:
The maturation of the public sphere in Argentina during the late nineteenth and early twentieth centuries was a critical element in the nation-building process and the overall development of the modern state. Within the context of this evolution, the discourse of disease generated intense debates that subsequently influenced policies that transformed the public spaces of Buenos Aires and facilitated state intervention within the private domains of the city’s inhabitants. Under the banner of hygiene and public health, municipal officials thus Europeanized the nation’s capital through the construction of parks and plazas and likewise utilized the press to garner support for the initiatives that would remedy the unsanitary conditions and practices of the city. Despite promises to the contrary, the improvements to the public spaces of Buenos Aires primarily benefited the porteño elite while the efforts to root out disease often targeted working-class neighborhoods. The model that reformed the public space of Buenos Aires, including its socially differentiated application of aesthetic order and public health policies, was ultimately employed throughout the Argentine Republic as the consolidated political elite rolled out its national program of material and social development.
Abstract:
Annual Average Daily Traffic (AADT) is a critical input to many transportation analyses. By definition, AADT is the average 24-hour volume at a highway location over a full year. Traditionally, AADT is estimated using a mix of permanent and temporary traffic counts. Because field collection of traffic counts is expensive, it is usually done only for the major roads, leaving most local roads without any AADT information. However, AADTs are needed for local roads in many applications. For example, AADTs are used by state Departments of Transportation (DOTs) to calculate the crash rates of all local roads in order to identify the top five percent of hazardous locations for annual reporting to the U.S. DOT. This dissertation develops a new method for estimating AADTs for local roads using travel demand modeling. A major component of the new method is a parcel-level trip generation model that estimates the trips generated by each parcel, using tax parcel data together with the trip generation rates and equations provided by the ITE Trip Generation Report. The generated trips are then distributed to existing traffic count sites using a parcel-level gravity model for trip distribution. The all-or-nothing assignment method is then used to assign the trips onto the roadway network to estimate the final AADTs. The entire process was implemented in the Cube demand modeling system with extensive spatial data processing in ArcGIS. To evaluate the performance of the new method, data from several study areas in Broward County, Florida were used. The estimated AADTs were compared with those from two existing methods, using actual traffic counts as the ground truth. The results show that the new method performs better than both existing methods. One limitation of the new method is its reliance on Cube, which limits the number of zones to 32,000; a study area exceeding this limit must therefore be partitioned into smaller areas. Because AADT estimates for roads near the boundary areas were found to be less accurate, further research could examine the best way to partition a study area to minimize this impact.
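A rough Python sketch of the two core steps named above, gravity-model trip distribution followed by all-or-nothing loading onto shortest paths, is given below. The function names, the exponential impedance with parameter beta, and the toy network are assumptions for illustration, not the dissertation's Cube/ArcGIS implementation.

```python
# Hedged sketch: gravity-model distribution + all-or-nothing assignment.
import heapq
from math import exp

def gravity_distribute(productions, attractions, cost, beta=0.1):
    """T_ij = P_i * A_j * f(c_ij) / sum_k A_k * f(c_ik), f = exp(-beta*c)."""
    trips = {}
    for i, P in productions.items():
        denom = sum(A * exp(-beta * cost[i][j]) for j, A in attractions.items())
        for j, A in attractions.items():
            trips[(i, j)] = P * A * exp(-beta * cost[i][j]) / denom
    return trips

def shortest_path_links(graph, src, dst):
    """Dijkstra, returning the links on the shortest path src -> dst."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    links, node = [], dst
    while node != src:
        links.append((prev[node], node))
        node = prev[node]
    return links

def all_or_nothing(graph, trips):
    """Load every O-D flow onto its single shortest path (AADT proxy)."""
    volume = {}
    for (i, j), t in trips.items():
        for link in shortest_path_links(graph, i, j):
            volume[link] = volume.get(link, 0.0) + t
    return volume

# Toy example: 100 parcel trips from A distributed to count site C.
graph = {"A": [("B", 2.0)], "B": [("C", 1.0)], "C": []}
trips = gravity_distribute({"A": 100.0}, {"C": 1.0}, {"A": {"C": 3.0}})
print(all_or_nothing(graph, trips))
```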
Abstract:
Significant improvements have been made in estimating gross primary production (GPP), ecosystem respiration (R), and net ecosystem production (NEP) from diel, “free-water” changes in dissolved oxygen (DO). Here we evaluate some of the assumptions and uncertainties that are still embedded in the technique and provide guidelines on how to estimate reliable metabolic rates from high-frequency sonde data. True whole-system estimates are often not obtained because measurements reflect an unknown zone of influence which varies over space and time. A minimum logging frequency of 30 min was sufficient to capture metabolism at the daily time scale. Higher sampling frequencies capture additional pattern in the DO data, primarily related to physical mixing. Causes behind the often large daily variability are discussed and evaluated for an oligotrophic and a eutrophic lake. Despite a 3-fold higher day-to-day variability in absolute GPP rates in the eutrophic lake, both lakes required at least 3 sonde days per week for GPP estimates to be within 20% of the weekly average. A sensitivity analysis evaluated uncertainties associated with DO measurements, piston velocity (k), and the assumption that daytime R equals nighttime R. In low productivity lakes, uncertainty in DO measurements and piston velocity strongly impacts R but has no effect on GPP or NEP. Lack of accounting for higher R during the day underestimates R and GPP but has no effect on NEP. We finally provide suggestions for future research to improve the technique.
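As context for the assumptions discussed above (notably daytime R equal to nighttime R), here is a minimal bookkeeping sketch of the standard free-water method: each DO change is corrected for atmospheric exchange, R is taken from nighttime steps, and GPP follows by difference. Function names and the simple piston-velocity treatment are illustrative, not necessarily the authors' exact formulation.

```python
# Minimal "free-water" metabolism bookkeeping (standard diel-DO
# approach). do is a series of DO readings [mg/L], do_sat the matching
# saturation values, k the gas-exchange (piston) velocity normalized by
# mixed-layer depth [1/timestep]; is_day flags each step (len(do)-1).

def net_flux_corrected(do, do_sat, k):
    """Per-step NEP: observed DO change plus atmospheric exchange,
    F = k * (DO - DO_sat) (positive F = efflux to the atmosphere)."""
    return [do[t+1] - do[t] + k * (do[t] - do_sat[t])
            for t in range(len(do) - 1)]

def daily_metabolism(nep_steps, is_day):
    """R from nighttime steps (extrapolated to 24 h, i.e. assuming
    daytime R equals nighttime R); GPP = daytime NEP minus daytime R."""
    night = [n for n, d in zip(nep_steps, is_day) if not d]
    r_per_step = sum(night) / len(night)        # negative if respiring
    R = r_per_step * len(nep_steps)             # whole-day respiration
    nep_day = sum(n for n, d in zip(nep_steps, is_day) if d)
    GPP = nep_day - r_per_step * sum(is_day)    # strip R out of day NEP
    NEP = sum(nep_steps)
    return GPP, R, NEP
```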
Abstract:
World War II profoundly impacted Florida. The military geography of the state is essential to an understanding of the war. The geostrategic concerns of place and space determined that Florida would become a statewide military base. Florida's attributes of place, such as climate and topography, determined its use as a military academy hosting over two million soldiers, nearly 15 percent of the GI Army, the largest force the US ever raised. One in eight Floridians went into uniform. Equally, Florida's space on the planet made it central to both defensive and offensive strategies. The Second World War was a war of movement, and Florida was a major jumping-off point for US force projection worldwide, especially of air power. Florida's demography facilitated its use as a base camp for the assembly and engagement of this military power. In 1940, less than two percent of the US population lived in Florida, a quiet, barely populated backwater of the United States. But owing to its critical place and space, over the next few years it became a 65,000-square-mile training ground, supply dump, and embarkation site vital to the US war effort. Because of its place astride some of the most important sea lanes in the Atlantic world, Florida was the scene of one of the few Western Hemisphere battles of the war. The militarization of Florida began long before Pearl Harbor. The pre-war buildup conformed to the US strategy of the war. The strategy of the US was then (and remains today) one of forward defense: harden the frontier, then take the battle to the enemy, rather than fight them in North America. The policy of "Europe First" focused the main US war effort on the defeat of Hitler's Germany, evaluated to be the most dangerous enemy. Established in Florida were the military forces requiring the longest time to develop and most needed to defeat the Axis: a naval aviation force for sea-borne hostilities, a heavy bombing force for reducing enemy industrial states, and an aerial logistics train for the overseas supply of expeditionary campaigns. The unique Florida coastline made possible the seaborne invasion training demanded for US victory. The civilian population was employed assembling mass-produced first-generation container ships, while Florida hosted casualties, prisoners of war, and transient personnel moving between the Atlantic and Pacific. By the end of hostilities and the lifting of the Unlimited Emergency, officially on December 31, 1946, Florida had become a transportation nexus. Florida accommodated a return of demobilized soldiers and a migration of displaced persons, and evolved into a modern veterans' colonia. It was instrumental in fashioning the modern US military, while remaining a center of the active national defense establishment. Those are the themes of this work.
Abstract:
Renewable or sustainable energy (SE) sources have attracted the attention of many countries because the power generated is environmentally friendly and the sources are not subject to instability of price and availability. This dissertation presents new trends in the DC-AC converters (inverters) used in renewable energy sources, particularly for photovoltaic (PV) energy systems. A review of the existing technologies is performed for both single-phase and three-phase systems, and the pros and cons of the best candidates are investigated. In many modern energy conversion systems, a DC voltage, provided by an SE source or energy storage device, must be boosted and converted to an AC voltage with fixed amplitude and frequency. A novel switching pattern based on the concept of the conventional space-vector pulse-width-modulation (SVPWM) technique is developed for single-stage boost-inverters using the current source inverter (CSI) topology. The six active switching states and two zero states of conventional SVPWM, in which three switches conduct at any given instant, are modified herein into three charging states and six discharging states with only two switches conducting at any given instant. The charging states are necessary in order to boost the DC input voltage. It is demonstrated that the CSI topology, in conjunction with the developed switching pattern, is capable of providing the required residential AC voltage from the low DC voltage of one PV panel at its rated power for both linear and nonlinear loads. In a micro-grid, control of active and reactive power, and consequently voltage regulation, is one of the main requirements. Therefore, the capability of the single-stage boost-inverter to control active power and provide reactive power is investigated. It is demonstrated that the injected active and reactive power can be independently controlled through two modulation indices introduced in the proposed switching algorithm. The system is capable of injecting a desirable level of reactive power, while maximum power point tracking (MPPT) dictates the desirable active power. The developed switching pattern is experimentally verified on a laboratory-scale three-phase 200 W boost-inverter for both grid-connected and stand-alone cases, and the results are presented.
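The sketch below illustrates, very loosely, how dwell times might be split between a boost (charging) interval and two adjacent discharging states within one switching period, using the classic space-vector sine projections. The two modulation indices, the function name, and the state bookkeeping are hypothetical placeholders, not the dissertation's actual switching algorithm.

```python
# Illustrative dwell-time split for a CSI space-vector scheme with a
# charging interval; all parameters and the idle-state treatment are
# assumptions for illustration only.
import math

def csi_dwell_times(theta, m_a, m_c, Ts=1e-4):
    """theta: reference-vector angle [rad]; m_a: discharging (amplitude)
    modulation index; m_c: charging index setting the boost interval.
    Returns dwell times within one switching period Ts."""
    sector_angle = theta % (math.pi / 3)   # angle within a 60-degree sector
    t_charge = m_c * Ts                    # boost (charging) interval
    t_avail = Ts - t_charge                # time left for discharging states
    t1 = m_a * t_avail * math.sin(math.pi / 3 - sector_angle) / math.sin(math.pi / 3)
    t2 = m_a * t_avail * math.sin(sector_angle) / math.sin(math.pi / 3)
    t_idle = t_avail - t1 - t2             # remainder of the period
    return t1, t2, t_charge, t_idle

print(csi_dwell_times(math.radians(20), m_a=0.9, m_c=0.2))
```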
Abstract:
The standard highway assignment model in the Florida Standard Urban Transportation Modeling Structure (FSUTMS) is based on the equilibrium traffic assignment method. This method involves running several iterations of all-or-nothing capacity-restraint assignment with an adjustment of travel time to reflect the delays encountered in each iteration. The iterative link time adjustment is accomplished through the Bureau of Public Roads (BPR) volume-delay equation. Since FSUTMS' traffic assignment procedure outputs daily volumes and the input capacities are given as hourly volumes, the hourly capacities must be converted to their daily equivalents when computing the volume-to-capacity ratios used in the BPR function. The conversion is accomplished by dividing the hourly capacity by a factor called the peak-to-daily ratio, referred to as CONFAC in FSUTMS. The ratio is computed as the highest hourly volume of a day divided by the corresponding total daily volume. While several studies have indicated that CONFAC is a decreasing function of the level of congestion, a constant value is used for each facility type in the current version of FSUTMS. This ignores the different congestion levels associated with individual roadways and is believed to be one of the culprits behind traffic assignment errors. Traffic count data from across the state of Florida were used to calibrate CONFACs as a function of a congestion measure using the weighted least squares method. The calibrated functions were then implemented in FSUTMS through a procedure that takes advantage of the iterative nature of FSUTMS' equilibrium assignment method. The assignment results based on constant and variable CONFACs were then compared against ground counts for three selected networks. It was found that the accuracy of the two assignments was not significantly different and that the hypothesized improvement in assignment results from the variable CONFAC model was not empirically evident. It was recognized that many other factors beyond the scope and control of this study could contribute to this finding. It was recommended that further studies focus on the use of the variable CONFAC model with recalibrated parameters for the BPR function and/or with other forms of volume-delay functions.
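The BPR computation with the CONFAC conversion is compact enough to show directly. The BPR form and its common defaults (alpha = 0.15, beta = 4) are standard; the variable-CONFAC function below is a hypothetical placeholder for the calibrated decreasing function of congestion, not the study's fitted curves.

```python
# BPR volume-delay with CONFAC-based daily capacity, plus a stand-in
# variable-CONFAC update, iterated in the spirit of the equilibrium loop.

def bpr_travel_time(t0, daily_volume, hourly_capacity, confac,
                    alpha=0.15, beta=4.0):
    """Free-flow time t0 [min] inflated by congestion:
    t = t0 * (1 + alpha * (v/c)^beta), with
    daily capacity = hourly capacity / CONFAC (peak-to-daily ratio)."""
    daily_capacity = hourly_capacity / confac
    vc = daily_volume / daily_capacity
    return t0 * (1.0 + alpha * vc ** beta)

def variable_confac(vc, confac_max=0.11, confac_min=0.08):
    """Hypothetical decreasing function of the v/c congestion measure,
    standing in for the weighted-least-squares calibrated curves."""
    return max(confac_min, confac_max - 0.02 * min(vc, 1.5))

# One equilibrium-style loop: update CONFAC from the previous v/c.
t0, volume, cap_hourly, confac = 10.0, 45000.0, 4500.0, 0.10
for _ in range(3):
    vc = volume / (cap_hourly / confac)
    confac = variable_confac(vc)
    print(f"v/c={vc:.2f}, CONFAC={confac:.3f}, "
          f"t={bpr_travel_time(t0, volume, cap_hourly, confac):.2f} min")
```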
Abstract:
This dissertation presents a study of the D(e,e′p)n reaction carried out at the Thomas Jefferson National Accelerator Facility (Jefferson Lab) for fixed values of the four-momentum transfer Q² = 2.1 and 0.8 (GeV/c)² and for missing momenta p_m ranging from 0.03 to 0.65 GeV/c. The analysis resulted in the determination of absolute D(e,e′p)n cross sections as a function of the recoiling neutron momentum and its scattering angle with respect to the momentum transfer vector q. The angular distribution was compared to various modern theoretical predictions that also included final state interactions (FSI). The data confirmed the theoretical prediction of a strong anisotropy of final state interaction contributions at Q² of 2.1 (GeV/c)², while at the lower Q² value the anisotropy was much less pronounced. At Q² of 0.8 (GeV/c)², theories show a large disagreement with the experimental results. The experimental momentum distribution of the bound proton inside the deuteron has been determined for the first time at a set of fixed neutron recoil angles. The momentum distribution is directly related to the ground-state wave function of the deuteron in momentum space. The high-momentum part of this wave function plays a crucial role in understanding the short-range part of the nucleon-nucleon force. At Q² = 2.1 (GeV/c)², the momentum distribution determined at small neutron recoil angles is much less affected by FSI compared to a recoil angle of 75°. In contrast, at Q² = 0.8 (GeV/c)² there seems to be no region with reduced FSI at larger missing momenta. Besides the statistical errors, systematic errors of about 5–6% were included in the final results to account for normalization uncertainties and uncertainties in the determination of kinematic variables. The measurements were carried out using electron beam energies of 2.8 and 4.7 GeV with beam currents between 10 and 100 μA. The scattered electrons and the ejected protons originated from a 15 cm long liquid deuterium target and were detected in coincidence with the two high-resolution spectrometers of Hall A at Jefferson Lab.
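For readers unfamiliar with how a cross section yields a momentum distribution, the textbook plane-wave impulse approximation (PWIA) factorization is shown below. This is a standard relation only; the dissertation's actual extraction also accounts for final state interactions.

```latex
% Standard PWIA factorization relating the measured coincidence cross
% section to the deuteron momentum distribution:
\[
  \frac{d^5\sigma}{dE_{e'}\, d\Omega_{e'}\, d\Omega_p}
  = K \,\sigma_{ep}\, n(p_m),
\]
% where K is a kinematic factor, \sigma_{ep} the (off-shell)
% electron-proton cross section, and n(p_m) the ground-state
% momentum-space density of the proton in the deuteron.
```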
Abstract:
Petri nets are a formal, graphical, and executable modeling technique for the specification and analysis of concurrent and distributed systems, and have been widely applied in computer science and many other engineering disciplines. Low-level Petri nets are simple and useful for modeling control flows, but not powerful enough to define data and system functionality. High-level Petri nets (HLPNs) have been developed to support data and functionality definitions, such as using complex structured data as tokens and algebraic expressions as transition formulas. Compared to low-level Petri nets, HLPNs result in compact system models that are easier to understand; they are therefore more useful in modeling complex systems. There are two issues in using HLPNs: modeling and analysis. Modeling concerns abstracting and representing the systems under consideration using HLPNs, and analysis deals with effective ways to study the behaviors and properties of the resulting HLPN models. In this dissertation, several modeling and analysis techniques for HLPNs are studied and integrated into a framework that is supported by a tool. For modeling, this framework integrates two formal languages: a type of HLPN called Predicate Transition Net (PrT Net) is used to model a system's behavior, and a first-order linear-time temporal logic (FOLTL) is used to specify the system's properties. The main contribution of this dissertation with regard to modeling is the development of a software tool to support the formal modeling capabilities of this framework. For analysis, the framework combines three complementary techniques: simulation, explicit-state model checking, and bounded model checking (BMC). Simulation is straightforward and fast, but covers only some execution paths in an HLPN model. Explicit-state model checking covers all execution paths but suffers from the state explosion problem. BMC is a tradeoff, providing a certain level of coverage while being more efficient than explicit-state model checking. The main contribution of this dissertation with regard to analysis is adapting BMC to analyze HLPN models and integrating the three complementary analysis techniques into a software tool to support the formal analysis capabilities of this framework. The SAMTools developed for this framework integrates three tools: PIPE+ for HLPN behavioral modeling and simulation, SAMAT for hierarchical structural modeling and property specification, and PIPE+Verifier for behavioral verification.
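To ground the notions of structured tokens and transition formulas, here is a toy predicate transition (PrT) net step in Python. The marking, guard, and arc expressions are illustrative assumptions, not PIPE+'s actual data structures.

```python
# A toy PrT net firing: structured tokens, a guard formula over a
# binding, and arc expressions computing the produced tokens.
from itertools import product

def enabled_bindings(marking, pre_places, guard):
    """Yield bindings (one token per input place) satisfying the guard."""
    for combo in product(*(marking[p] for p in pre_places)):
        binding = dict(zip(pre_places, combo))
        if guard(binding):
            yield binding

def fire(marking, pre_places, post_effects, guard):
    """Fire the first enabled binding: consume input tokens, then add
    the tokens computed by each output place's arc expression."""
    for b in enabled_bindings(marking, pre_places, guard):
        for p in pre_places:
            marking[p].remove(b[p])
        for place, expr in post_effects.items():
            marking[place].append(expr(b))
        return b
    return None  # transition not enabled

# Tokens are structured data; the guard is an algebraic constraint.
marking = {"requests": [("job1", 3), ("job2", 9)], "capacity": [5], "done": []}
b = fire(marking,
         pre_places=["requests", "capacity"],
         post_effects={"done": lambda b: b["requests"][0],
                       "capacity": lambda b: b["capacity"] - b["requests"][1]},
         guard=lambda b: b["requests"][1] <= b["capacity"])
print(b, marking)
```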
Abstract:
The present study comparatively examined the socio-political and economic transformation of the indigenous Sámi in Sweden and the Indian Americans in the United States of America, occurring first as a consequence of colonization and later as a product of interaction with the modern territorial and industrial state, from approximately 1500 to 1900. The first colonial encounters of Europeans with these autochthonous populations ultimately created an imagery of the exotic Other and of the noble savage. Despite these disparaging representations, the cross-cultural settings in which these interactions took place also produced the hybrid communities and syncretic life that allowed levels of cultural accommodation, autonomous space, and indigenous agency to emerge. By the nineteenth century, however, the modern territorial and industrial state rearranged the dynamics and reach of power across a redefined sovereign territorial space, consequently remapping belongingness and identity. In this context, the status of indigenous peoples, as in the case of the Sámi and of the Indian Americans, began to change in step with industrialization and modernity. At this point in time, indigenous populations became a hindrance to be dealt with through the legal re-codification of Indigenousness into a vacuumed limbo of disenfranchisement. It was thus the modern territorial and industrial state that re-created the exotic into an indigenous Other. The present research showed how the initial interaction between indigenous peoples and Europeans changed with the emergence of the modern state, demonstrating that the nineteenth century, with its fundamental impulses of industrialism and modernity, not only excluded and marginalized indigenous populations because they were considered unfit to join modern society, but also re-conceptualized indigenous identity into a constructed authenticity.
Abstract:
In the framework of the global energy balance, the radiative energy exchanges between the Sun, Earth, and space are now accurately quantified from new satellite missions. Much less is known about the magnitude of the energy flows within the climate system and at the Earth's surface, which cannot be directly measured by satellites. In addition to satellite observations, here we make extensive use of the growing number of surface observations to constrain the global energy balance not only from space but also from the surface. We combine these observations with the latest modeling efforts performed for the 5th IPCC assessment report to infer best estimates for the global mean surface radiative components. Our analyses favor global mean downward surface solar and thermal radiation values near 185 and 342 W m⁻², respectively, which are most compatible with surface observations. Combined with estimated surface absorbed solar radiation and thermal emission of 161 W m⁻² and 397 W m⁻², respectively, this leaves 106 W m⁻² of surface net radiation available for distribution amongst the non-radiative surface energy balance components. The climate models overestimate the downward solar and underestimate the downward thermal radiation, thereby nevertheless simulating an adequate global mean surface net radiation by error compensation. This also suggests that, globally, the simulated surface sensible and latent heat fluxes, around 20 and 85 W m⁻² on average, represent realistic values. The findings of this study are compiled into a new global energy balance diagram, which may be able to reconcile currently disputed inconsistencies between energy and water cycle estimates.
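The quoted surface net radiation follows directly from the component estimates; the check is worth writing out.

```latex
% Surface net radiation from the component estimates (all in W m^-2):
\[
  R_{net} = SW_{abs} + LW_{down} - LW_{up} = 161 + 342 - 397 = 106,
\]
% which is then partitioned among the non-radiative fluxes, consistent
% with the quoted sensible (~20) and latent (~85) heat estimates.
```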
Abstract:
Drillhole-determined sea-ice thickness was compared with values derived remotely using a portable, small-offset, loop-loop, steady-state electromagnetic (EM) induction device during expeditions to Fram Strait and the Siberian Arctic, under typical winter and summer conditions. Simple empirical transformation equations are derived to convert the measured apparent conductivity into ice thickness. Despite the extreme seasonal differences in sea-ice properties revealed by ice core analysis, the transformation equations vary little between winter and summer. Thus, the EM induction technique operated on the ice surface in the horizontal dipole mode yields accurate results, within 5 to 10% of the drillhole-determined thickness over level ice, in both seasons. The robustness of the induction method with respect to seasonal extremes is attributed to the low salinity of the brine or meltwater filling the extensive pore space in summer. Thus, the average bulk ice conductivity for summer multiyear sea ice derived according to Archie's law amounts to 23 mS/m, compared to 3 mS/m for winter conditions. These mean conductivities cause only minor differences in the EM response, as shown by means of 1-D modeling. However, under summer conditions the range of ice conductivities is wider. Along with the widespread occurrence of surface melt ponds and freshwater lenses underneath the ice, this causes greater scatter in the apparent conductivity/ice thickness relation, which can result in larger deviations between EM-derived and drillhole-determined thicknesses in summer than in winter.
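A sketch of what such an empirical transformation can look like is given below, using the negative-exponential form commonly fitted for small-offset EM sounding over level sea ice. The coefficients are illustrative placeholders, not the paper's fitted values.

```python
# Hedged sketch: invert an assumed empirical apparent-conductivity
# model, sigma_a = a + b * exp(c * z), for ice thickness z.
import math

def ice_thickness(sigma_a, a=60.0, b=600.0, c=-0.6):
    """sigma_a: apparent conductivity [mS/m]; returns thickness z [m].
    Coefficients a, b, c are illustrative, not the fitted values."""
    return math.log((sigma_a - a) / b) / c

for s in (400.0, 200.0, 100.0):   # thinner ice -> higher conductivity
    print(f"{s:.0f} mS/m -> {ice_thickness(s):.2f} m")
```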
Abstract:
The rural dwellings of the nineteenth-century livestock farms of the Seridó Potiguar microregion became a reference for their vernacular character: besides having recognized relevance to the identity of the region, these buildings are adapted to local conditions in many respects (economic, cultural, constructive, physical, etc.) and constitute protective spaces against the hostile characteristics of the Seridó climate. Given this premise, the following question arises: which characteristics of the nineteenth-century Seridó Potiguar cattle farms are crucial for them to act as protective spaces in relation to the semiarid climate? To answer this question, this research aims to identify the particularities of the Seridó farmhouses that contribute to these buildings' adaptability to the semiarid climate as protective environments, and to contribute to the valuation of the architectural heritage concerned. The procedures adopted were divided into two stages. First, the recurring characteristics of the studied buildings were identified through a typological study based on existing inventories (DINIZ, 2008; FEIJÓ, 2002; IPHAN, 2012). To define the type, the study combined Durand's analytical typology, which identifies similarities and differences to classify buildings and has the character of a historical survey and architectural documentation, with Argan's (1963) definition that the type is not defined a priori, but deduced from a number of illustrative cases that share formal and functional similarity with each other. A sample of five mutually distinct types was then selected, based on the possibility of access to the interior of the houses, proximity to other examples, and good state of conservation and preservation. The farms studied were Pitombeiras, Agenus, and Garrotes in the municipality of Acari, and Palma and Penedo in the municipality of Caicó. The second stage consisted of an architectural survey, photographic records, digital three-dimensional modeling (to expand the existing documentation and records), and thermal monitoring over approximately one representative day in each of the five farmhouses, relating the thermal performance of the houses to their individual characteristics. The variables selected for the monitoring analysis are based on the adaptive thermal comfort model (SPAGNOLO and DE DEAR, 2003 apud NEGREIROS, 2010). The characteristics of the houses were analyzed against the passive thermal conditioning strategies recommended by NBR 15220 (ABNT, 2005) for bioclimatic zone 7, where the municipalities of Caicó and Acari are located. Analysis of the houses' operative temperatures revealed that the environments are within the comfort range 90% of the time. The farmhouses with a higher degree of compliance with the recommended bioclimatic strategies showed the best thermal performance. In the environments that still presented hours of discomfort (usually kitchens and rooms with low ceiling heights exposed to western radiation), thermal comfort can be reached with air movement of approximately 1.0 m/s.
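As a small illustration of the comfort-range accounting mentioned above, the sketch below checks operative temperatures against an adaptive comfort band. The ASHRAE-55-style band (T_comf = 0.31·T_out + 17.8, ± 3.5 K) is a stand-in for the Spagnolo and de Dear formulation actually used, and the temperature series is a hypothetical example, not monitored data.

```python
# Illustrative adaptive-comfort check; band form and data are stand-ins.

def comfort_fraction(operative_temps, t_out_mean, half_band=3.5):
    """Fraction of readings inside the adaptive comfort band."""
    t_comf = 0.31 * t_out_mean + 17.8          # ASHRAE-55-style neutral temp
    inside = [t for t in operative_temps
              if t_comf - half_band <= t <= t_comf + half_band]
    return len(inside) / len(operative_temps)

# Hypothetical hourly operative temperatures for one room [deg C]:
room = [27.0, 27.5, 28.0, 29.5, 31.0, 30.5, 29.0, 28.0]
print(f"{comfort_fraction(room, t_out_mean=28.0):.0%} of hours in comfort")
```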