13 results for system models
in Digital Commons - Michigan Tech
Abstract:
Protecting modern interconnected power systems from blackouts is an important and difficult challenge. Applying advanced power system protection techniques and increasing power system stability are ways to improve the reliability and security of power systems. Phasor-domain software packages such as Power System Simulator for Engineers (PSS/E) can be used to study large power systems but cannot be used for transient analysis. In order to observe both power system stability and the transient behavior of the system during disturbances, modeling has to be done in the time domain. This work focuses on modeling power systems and various control systems in the Alternative Transients Program (ATP). ATP is a time-domain power system modeling software package in which all power system components can be modeled in detail. Models are implemented with attention to component representation and parameters. The synchronous machine model includes the saturation characteristics and a control interface. The Transient Analysis of Control Systems (TACS) feature is used to model the excitation control system, power system stabilizer, and turbine governor system of the synchronous machine. Several base cases of a single-machine system are modeled and benchmarked against PSS/E. A two-area system is modeled, and inter-area and intra-area oscillations are observed. The two-area system is reduced to a two-machine system using dynamic equivalencing. The original and reduced systems are benchmarked against PSS/E. This work also includes the simulation of single-pole tripping using one of the base case models. The advantages of single-pole tripping and a comparison of system behavior against three-pole tripping are studied. Results indicate that the built-in control system models in PSS/E can be effectively reproduced in ATP. The benchmarked models correctly simulate the power system dynamics. The successful implementation of a dynamically reduced system in ATP shows promise for studying a small sub-system of a large system without losing its dynamic behavior. Other aspects such as relaying can be investigated using the benchmarked models. It is expected that this work will provide guidance in modeling different control systems for the synchronous machine and in representing dynamic equivalents of large power systems.
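For readers unfamiliar with time-domain stability simulation, the sketch below integrates a classical single-machine-infinite-bus swing equation through a brief fault. The inertia, damping, and power values are hypothetical and are not taken from the ATP or PSS/E models described in the abstract.

```python
import numpy as np

# Illustrative sketch only: classical swing-equation dynamics integrated in the
# time domain. All parameters are assumed values.
H, D = 3.5, 2.0                    # inertia constant (s) and damping (pu)
ws = 2 * np.pi * 60                # synchronous speed (rad/s)
Pm = 0.8                           # mechanical power input (pu)
Pmax_pre, Pmax_fault = 1.8, 0.4    # power transfer limit before / during a fault

dt, t_end = 1e-3, 5.0
delta = np.arcsin(Pm / Pmax_pre)   # pre-fault rotor angle (rad)
dw = 0.0                           # per-unit speed deviation

for k in range(int(t_end / dt)):
    t = k * dt
    Pmax = Pmax_fault if 1.0 <= t < 1.1 else Pmax_pre   # 100 ms fault window
    Pe = Pmax * np.sin(delta)
    # swing equation: 2H * d(dw)/dt = Pm - Pe - D*dw ;  d(delta)/dt = ws*dw
    dw += dt * (Pm - Pe - D * dw) / (2 * H)
    delta += dt * ws * dw

print(f"post-disturbance rotor angle: {np.degrees(delta):.1f} deg")
```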
Abstract:
In a statistical inference scenario, the estimation of a target signal or its parameters is performed by processing data from informative measurements. The estimation performance can be enhanced if we choose the measurements based on criteria that direct our sensing resources so that the measurements are more informative about the parameter we intend to estimate. When taking multiple measurements, the measurements can be chosen online so that more information is extracted from the data in each measurement process. This approach fits well within the Bayesian inference model, which is often used to produce successive posterior distributions of the associated parameter. We explore the sensor array processing scenario for adaptive sensing of a target parameter. The measurement choice is described by a measurement matrix that multiplies the data vector normally associated with array signal processing. Adaptive sensing of both static and dynamic system models is performed through the online selection of a proper measurement matrix over time. For the dynamic system model, the target is assumed to move according to some distribution, and the prior distribution is updated at each time step. The information gained through adaptive sensing of the moving target is lost due to the relative shift of the target. The adaptive sensing paradigm has many similarities with compressive sensing. We attempt to reconcile the two approaches by modifying the observation model of adaptive sensing to match the compressive sensing model for the estimation of a sparse vector.
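The following is a minimal sketch of the online measurement-selection idea in a Gaussian-linear setting. The candidate measurement vectors, noise level, and selection score are assumptions for illustration, not the models used in the thesis.

```python
import numpy as np

# Sequentially pick the scalar measurement that is most informative under the
# current Gaussian posterior, then update the posterior in closed form.
rng = np.random.default_rng(0)
n = 8
x_true = rng.standard_normal(n)            # unknown parameter vector
mu, Sigma = np.zeros(n), np.eye(n)         # Gaussian prior N(mu, Sigma)
noise_var = 0.1
candidates = rng.standard_normal((50, n))  # candidate measurement (row) vectors

for step in range(20):
    # choose the candidate with the largest predicted variance a^T Sigma a
    scores = np.einsum('ij,jk,ik->i', candidates, Sigma, candidates)
    a = candidates[np.argmax(scores)]
    y = a @ x_true + rng.normal(scale=np.sqrt(noise_var))   # take the measurement
    # standard Gaussian posterior update (rank-one Kalman-style correction)
    s = a @ Sigma @ a + noise_var
    k = Sigma @ a / s
    mu = mu + k * (y - a @ mu)
    Sigma = Sigma - np.outer(k, a @ Sigma)

print("estimation error:", np.linalg.norm(mu - x_true))
```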
Abstract:
The primary challenge in groundwater and contaminant transport modeling is obtaining the data needed for constructing, calibrating, and testing the models. Large amounts of data are necessary to describe the hydrostratigraphy in areas with complex geology. Increasingly, states are making spatial data available that can be used as input to groundwater flow models. The appropriateness of these data for large-scale flow systems has not been tested. This study focuses on modeling a plume of 1,4-dioxane in a heterogeneous aquifer system in Scio Township, Washtenaw County, Michigan. The analysis consisted of: (1) characterization of the hydrogeology of the area and construction of a conceptual model based on publicly available spatial data, (2) development and calibration of a regional flow model for the site, (3) conversion of the regional model to a more highly resolved local model, (4) simulation of the dioxane plume, and (5) evaluation of the model's ability to simulate field data and estimation of the possible dioxane sources and their subsequent migration until maximum concentrations are at or below the Michigan Department of Environmental Quality's residential cleanup standard for groundwater (85 ppb). The MODFLOW-2000 and MT3D programs were used to simulate the groundwater flow and the development and movement of the 1,4-dioxane plume, respectively. MODFLOW simulates transient groundwater flow in a quasi-3-dimensional sense, subject to a variety of boundary conditions that can represent recharge, pumping, and surface-water/groundwater interactions. MT3D simulates solute advection with groundwater flow (using the flow solution from MODFLOW), dispersion, source/sink mixing, and chemical reaction of contaminants. This modeling approach was successful at simulating the groundwater flows by calibrating recharge and hydraulic conductivities. The plume transport was adequately simulated using literature values for dispersivity and sorption coefficients, although the plume geometries were not well constrained.
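As a rough illustration of the transport processes MT3D represents (advection, dispersion, and sorption), the sketch below solves a 1-D advection-dispersion equation with retardation by explicit finite differences. It is not MODFLOW or MT3D, and all parameter values are hypothetical.

```python
import numpy as np

# Conceptual 1-D transport sketch: an instantaneous source zone migrates and
# spreads under advection, dispersion, and linear sorption (retardation).
L, nx = 1000.0, 200            # domain length (m) and number of cells
dx = L / nx
v = 0.2                        # average linear groundwater velocity (m/d)
D = 1.0                        # longitudinal dispersion coefficient (m^2/d)
R = 1.2                        # retardation factor from sorption
dt = 0.4 * dx**2 / D           # time step within explicit stability limits
c = np.zeros(nx)
c[0:5] = 1000.0                # instantaneous source zone concentration (ppb)

for _ in range(int(3650 / dt)):                      # ~10 years of transport
    adv = -v * np.gradient(c, dx)
    disp = D * np.gradient(np.gradient(c, dx), dx)
    c = c + dt * (adv + disp) / R

print(f"peak concentration after 10 years: {c.max():.1f} ppb")
```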
Abstract:
In mid-July 2003, the U.S. Army Tank-Automotive & Armaments Command (TACOM) performed a series of experiments at the Keweenaw Research Center (KRC) with a remotely operated mine roller system. This system, named Panther Lite, consists of two M113 Armored Personnel Carriers (APCs) connected by a Tandem Vehicle Linkage Assembly (TVLA). The system has three sets of mine rollers, two of which are connected to the front of the lead vehicle, with one set trailing from the trail vehicle. Currently, the system requires two joystick controllers. One regulates the braking of the tracks, the throttle, and the transmission of the lead vehicle, and the other controls the braking and throttle of the rear vehicle. One operator controls both joysticks, attempting to maneuver the lead vehicle along a desired path. At the same time, this operator makes compensating maneuvers to reduce lateral loads in the TVLA and to guide the rear mine rollers along the desired path. The purpose of this project is to create algorithms that would allow the slave (trail) vehicle to operate using the inputs that maneuver the control (lead) vehicle. The project will be completed by first reconstructing the experimental data. Kinematic models will then be generated and simulations created. The models will be correlated with the reconstructions of the experimental data. The successful completion of this project will be a first step toward eliminating the need for the second joystick.
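The sketch below illustrates the kind of lead/trail kinematic relationship such models capture: a lead vehicle driven by speed and yaw-rate commands tows a trailing unit through a rigid linkage. The linkage length, speed, and commands are assumed values, and the code is not the Panther Lite control algorithm.

```python
import numpy as np

# Illustrative articulated-vehicle kinematics: the trailing unit's heading
# relaxes toward the lead heading through the articulation angle.
dt, T = 0.05, 20.0
L_link = 4.0                    # assumed linkage (TVLA) length in meters
x, y, th_lead = 0.0, 0.0, 0.0   # lead vehicle pose
th_trail = 0.0                  # trailing unit heading

for k in range(int(T / dt)):
    v = 2.0                                 # lead speed (m/s), constant here
    omega = 0.2 * np.sin(0.3 * k * dt)      # lead yaw-rate command (rad/s)
    # lead vehicle kinematics
    x += dt * v * np.cos(th_lead)
    y += dt * v * np.sin(th_lead)
    th_lead += dt * omega
    # trailer kinematics: heading driven by the articulation angle
    th_trail += dt * (v / L_link) * np.sin(th_lead - th_trail)

# trailing unit position sits one linkage length behind, along its own heading
xt, yt = x - L_link * np.cos(th_trail), y - L_link * np.sin(th_trail)
print(f"lead at ({x:.1f}, {y:.1f}), trailer at ({xt:.1f}, {yt:.1f})")
```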
Abstract:
A free-space optical (FSO) laser communication system with perfect fast-tracking experiences random power fading due to atmospheric turbulence. For an FSO communication system without fast-tracking or with imperfect fast-tracking, the fading probability density function (pdf) is also affected by the pointing error. In this thesis, the overall fading pdfs of FSO communication systems with pointing errors are calculated using an analytical method based on the fast-tracked on-axis and off-axis fading pdfs and the fast-tracked beam profile of a turbulence channel. The overall fading pdf is first studied for an FSO communication system with a collimated laser beam. Large-scale numerical wave-optics simulations are performed to verify the analytically calculated fading pdf with a collimated beam under various turbulence channels and pointing errors. The calculated overall fading pdfs are almost identical to the directly simulated fading pdfs. The calculated overall fading pdfs are also compared with the gamma-gamma (GG) and log-normal (LN) fading pdf models; they fit better than both the GG and LN models under different receiver aperture sizes in all the studied cases. Further, the analytical method is extended to FSO communication systems with a beam diverging angle. It is shown that the gamma pdf model remains valid for the fast-tracked on-axis and off-axis fading pdfs with a point-like receiver aperture when the laser beam propagates with a diverging angle. Large-scale numerical wave-optics simulations confirm that the analytically calculated fading pdfs closely fit the overall fading pdfs for both focused and diverged beam cases. The influence of the fast-tracked on-axis and off-axis fading pdfs, the fast-tracked beam profile, and the pointing error on the overall fading pdf is also discussed. Finally, the analytical method is compared with heuristic fading pdf models proposed since the 1970s. Although some previously proposed fading pdf models provide a close fit to experimental and simulation data, these close fits exist only under particular conditions. Only the analytical method shows an accurate fit to the directly simulated fading pdfs under different turbulence strengths, propagation distances, receiver aperture sizes, and pointing errors.
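For reference, the sketch below evaluates the two heuristic models named above, the gamma-gamma and log-normal irradiance pdfs, for a unit-mean irradiance. The turbulence parameters are assumed values, not those of the studied channels.

```python
import numpy as np
from scipy.special import gamma, kv

# Heuristic fading pdf models for normalized (unit-mean) irradiance I.
def gamma_gamma_pdf(I, alpha, beta):
    """Gamma-gamma irradiance pdf with large/small-scale parameters alpha, beta."""
    coeff = 2 * (alpha * beta) ** ((alpha + beta) / 2) / (gamma(alpha) * gamma(beta))
    return coeff * I ** ((alpha + beta) / 2 - 1) * kv(alpha - beta, 2 * np.sqrt(alpha * beta * I))

def lognormal_pdf(I, sigma2):
    """Log-normal irradiance pdf with log-irradiance variance sigma2 and unit mean."""
    return np.exp(-(np.log(I) + sigma2 / 2) ** 2 / (2 * sigma2)) / (I * np.sqrt(2 * np.pi * sigma2))

I = np.linspace(0.01, 3.0, 300)
gg = gamma_gamma_pdf(I, alpha=4.0, beta=2.0)   # assumed turbulence parameters
ln = lognormal_pdf(I, sigma2=0.25)             # assumed log-irradiance variance
print(f"GG pdf peak at I = {I[np.argmax(gg)]:.2f}, LN pdf peak at I = {I[np.argmax(ln)]:.2f}")
```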
Abstract:
In 2005, Wetland Studies and Solutions, Inc. (WSSI) installed an extensive Low Impact Development (LID) stormwater management system on their new office site in Gainesville, Virginia. The 4-acre site is serviced by a network of LID components: permeable pavements (two proprietary types and one gravel type), a bioretention cell / rain garden, a green roof, a vegetated swale, rainwater harvesting with drip irrigation, and slow-release underground detention. The site consists of heavy clay soils, and the LID components are largely integrated by a series of underdrain pipes. A comprehensive monitoring system has been designed and installed to measure hydrologic performance throughout the underdrained LID network. The monitoring system measures flows into and out of each LID component independently while concurrently monitoring rainfall events. A sensitivity analysis and a laboratory calibration have been performed on the flow measurement system. Field data have been evaluated to determine the hydrologic performance of the LID features. Finally, hydrologic models amenable to compact, underdrained LID sites have been reviewed and recommended for future modeling and design.
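A hypothetical sketch of the kind of laboratory flow calibration described above: fitting a power-law rating curve to stage/flow pairs so that field stage readings can be converted to flows. The data values are invented.

```python
import numpy as np

# Fit Q = C * h^n to measured stage/flow pairs by least squares in log space.
h = np.array([0.02, 0.05, 0.10, 0.15, 0.20, 0.30])         # stage (m)
Q = np.array([0.12, 0.55, 1.70, 3.30, 5.20, 10.4]) / 1000   # measured flow (m^3/s)

# log Q = log C + n * log h  ->  linear regression
n, logC = np.polyfit(np.log(h), np.log(Q), 1)
C = np.exp(logC)
print(f"rating curve: Q = {C:.3f} * h^{n:.2f}")
print(f"flow at h = 0.12 m: {C * 0.12 ** n * 1000:.2f} L/s")
```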
Abstract:
This report explores the problem of detecting complex point target models in a MIMO radar system. A complex point target is a mathematical and statistical model for a radar target that is not resolved in space but exhibits varying complex reflectivity across the different bistatic view angles. The complex reflectivity can be modeled as a complex stochastic process whose index set is the set of all bistatic view angles, and the parameters of the stochastic process follow from an analysis of a target model comprising a number of ideal point scatterers randomly located within some radius of the target's center of mass. The proposed complex point targets may be applicable to statistical inference in multistatic or MIMO radar systems. Six different target models are summarized here: three 2-dimensional (Gaussian, Uniform Square, and Uniform Circle) and three 3-dimensional (Gaussian, Uniform Cube, and Uniform Sphere). They are assumed to have different distributions for the locations of the point scatterers within the target. We develop data models for the received signals from such targets in a MIMO radar system with distributed assets and partially correlated signals, and consider the resulting detection problem, which reduces to the familiar Gauss-Gauss detection problem. We illustrate that the target parameters and the transmit signal influence detector performance through the target extent and the SNR, respectively. A series of receiver operating characteristic (ROC) curves is generated to show the impact of varying SNR on the detector. The Kullback-Leibler (KL) divergence is applied to obtain the approximate mean difference between the density functions that the scatterers assume inside the target models, showing how the performance of the detector changes with the extent of the point scatterers.
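The sketch below sets up the Gauss-Gauss detection problem referred to above: a quadratic log-likelihood-ratio detector, a Monte Carlo ROC sweep, and the KL divergence between the two Gaussian hypotheses as a performance measure. The covariance matrices are hypothetical stand-ins for the derived signal models, and the thesis's use of KL divergence on the scatterer location densities is not reproduced here.

```python
import numpy as np

# H0: y ~ N(0, R0) (noise only);  H1: y ~ N(0, R1) (noise plus target return).
rng = np.random.default_rng(1)
d = 6
R0 = np.eye(d)                              # noise-only covariance
A = rng.standard_normal((d, d))
R1 = R0 + 0.5 * (A @ A.T) / d               # assumed noise + target covariance

def llr(y, R0, R1):
    """Log-likelihood ratio for zero-mean Gaussians (quadratic detector)."""
    q = y @ (np.linalg.inv(R0) - np.linalg.inv(R1)) @ y
    return 0.5 * (q + np.log(np.linalg.det(R0) / np.linalg.det(R1)))

# Monte Carlo ROC points: thresholds fixed at false-alarm rates of 10%, 5%, 1%
t0 = [llr(rng.multivariate_normal(np.zeros(d), R0), R0, R1) for _ in range(5000)]
t1 = [llr(rng.multivariate_normal(np.zeros(d), R1), R0, R1) for _ in range(5000)]
for thr in np.percentile(t0, [90, 95, 99]):
    pfa = np.mean(np.array(t0) > thr)
    pd = np.mean(np.array(t1) > thr)
    print(f"Pfa {pfa:.2f} -> Pd {pd:.2f}")

# KL divergence between the two zero-mean Gaussian hypotheses
kl = 0.5 * (np.trace(np.linalg.inv(R1) @ R0) - d + np.log(np.linalg.det(R1) / np.linalg.det(R0)))
print(f"KL(H0 || H1) = {kl:.3f} nats")
```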
Abstract:
The objective of this doctoral research is to investigate internal frost damage due to crystallization pore pressure in porous cement-based materials by developing computational and experimental characterization tools. As an essential component of the U.S. infrastructure system, the durability of concrete has a significant impact on maintenance costs. In cold climates, freeze-thaw damage is a major issue affecting the durability of concrete. The deleterious effects of freeze-thaw cycles depend on the microscale characteristics of concrete, such as the pore sizes and pore distribution, as well as the environmental conditions. Recent theories attribute the internal frost damage of concrete to crystallization pore pressure in cold environments. The pore structure has a significant impact on the freeze-thaw durability of cement/concrete samples. Scanning electron microscopy (SEM) and transmission X-ray microscopy (TXM) techniques were applied to characterize freeze-thaw damage within the pore structure. In the microscale pore system, the crystallization pressures at sub-cooling temperatures were calculated using an interface energy balance with thermodynamic analysis. Multi-phase Extended Finite Element Modeling (XFEM) and bilinear Cohesive Zone Modeling (CZM) were developed to simulate the internal frost damage of heterogeneous cement-based material samples. The fracture simulations with these two techniques were validated by comparing the predicted fracture behavior with the damage captured in compact tension (CT) and single-edge notched beam (SEB) bending tests. The study applied the developed computational tools to simulate the internal frost damage caused by ice crystallization with two-dimensional (2-D) SEM and three-dimensional (3-D) reconstructed SEM and TXM digital samples. The pore pressure calculated from the thermodynamic analysis was used as input for the model simulation. The 2-D and 3-D bilinear CZM predicted crack initiation and propagation within the cement paste microstructure. The favorably predicted crack paths in concrete/cement samples indicate that the developed bilinear CZM techniques are able to capture crack nucleation and propagation in cement-based material samples with multiple phases and associated interfaces. Comparison of the computational predictions with the actual damaged samples also indicates that ice crystallization pressure is the main mechanism for internal frost damage in cementitious materials.
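As an illustration of the bilinear CZM ingredient, the sketch below implements a bilinear traction-separation law. The cohesive strength and separation values are assumed, not the calibrated cement-paste properties.

```python
import numpy as np

# Bilinear cohesive law: traction rises linearly to t_max at delta_0, then
# softens linearly to zero at delta_f. Values are assumed for illustration.
t_max = 3.0e6        # cohesive strength (Pa)
delta_0 = 5.0e-6     # separation at peak traction (m)
delta_f = 5.0e-5     # separation at complete failure (m)

def bilinear_traction(delta):
    delta = np.asarray(delta, dtype=float)
    rising = t_max * delta / delta_0
    softening = t_max * (delta_f - delta) / (delta_f - delta_0)
    t = np.where(delta <= delta_0, rising, softening)
    return np.clip(t, 0.0, None)        # no traction beyond delta_f

d = np.linspace(0, 6e-5, 7)
print(np.round(bilinear_traction(d), 1))
# fracture energy is the area under the triangular curve
print(f"fracture energy G_f = {0.5 * t_max * delta_f:.1f} J/m^2")
```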
Abstract:
Invasive exotic plants have altered natural ecosystems across much of North America. In the Midwest, the presence of invasive plants is increasing rapidly, causing changes in ecosystem patterns and processes. Early detection has become a key component in invasive plant management and in the detection of ecosystem change. Risk assessment through predictive modeling has been a useful resource for monitoring and assisting with treatment decisions for invasive plants. Predictive models were developed to assist with early detection of ten target invasive plants in the Great Lakes Network of the National Park Service and for garlic mustard throughout the Upper Peninsula of Michigan. These multi-criteria risk models utilize geographic information system (GIS) data to predict the areas at highest risk for three phases of invasion: introduction, establishment, and spread. An accuracy assessment of the models for the ten target plants in the Great Lakes Network showed an average overall accuracy of 86.3%. The model developed for garlic mustard in the Upper Peninsula resulted in an accuracy of 99.0%. Used as one of many resources, the risk maps created from the model outputs will assist with the detection of ecosystem change, the monitoring of plant invasions, and the management of invasive plants through prioritized control efforts.
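A minimal sketch of a multi-criteria risk overlay of the kind described: GIS layers scored 0-1 are combined with weights and classified into risk classes. The layer names, weights, and break points are hypothetical examples, not the published model criteria.

```python
import numpy as np

# Weighted overlay of raster criteria followed by classification into
# low / moderate / high risk. Layers here are random stand-ins for GIS data.
rng = np.random.default_rng(2)
shape = (100, 100)                                  # raster grid
layers = {
    "distance_to_roads": rng.random(shape),         # proxy for introduction pressure
    "canopy_openness":   rng.random(shape),         # proxy for establishment suitability
    "soil_moisture":     rng.random(shape),
}
weights = {"distance_to_roads": 0.5, "canopy_openness": 0.3, "soil_moisture": 0.2}

risk = sum(weights[name] * layer for name, layer in layers.items())

classes = np.digitize(risk, bins=[0.4, 0.7])        # 0 = low, 1 = moderate, 2 = high
print("cells at highest risk:", int((classes == 2).sum()))
```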
Abstract:
This project addresses the potential impacts of changing climate on dry-season water storage and discharge from a small mountain catchment in Tanzania. Villagers and water managers around the catchment have experienced worsening water scarcity and attribute it to increasing population and demand, but very little has been done to understand the physical characteristics and hydrological behavior of the spring catchment. The physical nature of the aquifer was characterized, and water balance models were calibrated to discharge observations in order to explore relative changes in aquifer storage resulting from climate changes. To characterize the shallow aquifer supplying water to the Jandu spring, water quality and geochemistry data were analyzed, discharge recession analysis was performed, and two water balance models were developed and tested. Jandu geochemistry suggests a shallow, meteorically recharged aquifer system with short circulation times. Baseflow recession analysis showed that the catchment behavior could be represented by a linear storage model with an average recession constant of 0.151/month from 2004-2010. Two modified Thornthwaite-Mather Water Balance (TMWB) models were calibrated using historic rainfall and discharge data and shown to reproduce dry-season flows with Nash-Sutcliffe efficiencies between 0.86 and 0.91. The modified TMWB models were then used to examine nineteen perturbed climate scenarios to test the potential impacts of regional climate change on catchment storage during the dry season. Forcing the models with realistic scenarios for average monthly temperature, annual precipitation, and seasonal rainfall distribution demonstrated that even small climate changes might adversely impact aquifer storage conditions at the onset of the dry season. The scale of the change was dependent on the direction (increasing vs. decreasing) and magnitude of the climate change (temperature and precipitation). This study demonstrates that characterization of small mountain aquifers is possible using simple water quality parameters, that recession analysis can be integrated into the modeling of aquifer storage parameters, and that water balance models can accurately reproduce dry-season discharges and may be useful tools for assessing climate change impacts. However, uncertainty in current climate projections and the lack of data for testing the predictive capabilities of the model beyond the present data set make the forecasts of changes in discharge also uncertain. The hydrologic tools used herein offer promise for future research on small, shallow, mountainous aquifers and could potentially be developed and used by water resource professionals to assess climatic influences on local hydrologic systems.
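The sketch below reproduces the two quantitative pieces quoted above: a linear-reservoir recession using the reported constant of 0.151/month, and the Nash-Sutcliffe efficiency used to score modeled against observed dry-season flows. The discharge series itself is invented for illustration.

```python
import numpy as np

# Linear storage (reservoir) recession: Q(t) = Q0 * exp(-k t)
k = 0.151                                   # recession constant (1/month), as reported
months = np.arange(0, 7)                    # dry-season months
Q0 = 12.0                                   # assumed discharge at dry-season onset (L/s)
Q_recession = Q0 * np.exp(-k * months)

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# invented "observations": the recession curve with 5% noise
Q_obs = Q_recession * (1 + 0.05 * np.random.default_rng(3).standard_normal(len(months)))
print("recession (L/s):", np.round(Q_recession, 2))
print(f"NSE against noisy observations: {nash_sutcliffe(Q_obs, Q_recession):.2f}")
```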
Abstract:
The development of embedded control systems for a Hybrid Electric Vehicle (HEV) is a challenging task due to the multidisciplinary nature of the HEV powertrain and its complex structure. Hardware-In-the-Loop (HIL) simulation provides an open and convenient environment for modeling, prototyping, testing, and analyzing HEV control systems. This thesis focuses on the development of such a HIL system for hybrid electric vehicle study. The hardware architecture of the HIL system, including the dSPACE eDrive HIL simulator, MicroAutoBox II, and MotoTron Engine Control Module (ECM), is introduced. Software used in the system includes the dSPACE Real-Time Interface (RTI) blockset, Automotive Simulation Models (ASM), Matlab/Simulink/Stateflow, Real-Time Workshop, ControlDesk Next Generation, ModelDesk, and MotoHawk/MotoTune. A case study of the development of control systems for a single-shaft parallel hybrid electric vehicle is presented to summarize the functionality of this HIL system.
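As a highly simplified picture of what a HIL loop exercises, the sketch below steps a toy plant model and a PI controller at a fixed rate in closed loop. The parameters are hypothetical, and the real system uses the dSPACE/ASM and MotoHawk toolchain listed above rather than Python.

```python
import numpy as np

# Toy closed loop: a first-order vehicle-speed plant (stand-in for the real-time
# plant model) exchanges signals each step with a PI speed controller (stand-in
# for the ECM code). All values are assumed.
dt, T = 0.01, 10.0
mass, drag = 1500.0, 60.0           # assumed vehicle mass (kg) and drag (N*s/m)
v, integ = 0.0, 0.0                 # vehicle speed and PI integrator state
kp, ki, v_ref = 800.0, 120.0, 15.0  # controller gains and speed setpoint (m/s)

for _ in range(int(T / dt)):
    # controller step (would run on the controller hardware in a HIL setup)
    err = v_ref - v
    u = kp * err + ki * integ
    force = np.clip(u, 0.0, 4000.0)     # traction force command (N)
    if u == force:                      # simple anti-windup: integrate only when unsaturated
        integ += err * dt
    # plant step (would run on the real-time simulator)
    v += dt * (force - drag * v) / mass

print(f"speed after {T:.0f} s: {v:.2f} m/s (setpoint {v_ref} m/s)")
```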
Abstract:
Sensor networks have been an active research area in the past decade due to the variety of their applications. Many research studies have been conducted to solve the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have grown into a mature technology to be used as a detection and surveillance paradigm for many real-world applications. Individual sensors are small in size, so they can be deployed in areas with limited space to make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, there are a few physical limitations of sensor networks that can prevent sensors from performing at their maximum potential. Individual sensors have a limited power supply, and the wireless band can become very cluttered when multiple sensors try to transmit at the same time. Furthermore, individual sensors have limited communication range, so the network may not have a 1-hop communication topology, and routing can be a problem in many cases. Carefully designed algorithms can alleviate the physical limitations of sensor networks and allow them to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application in sensor networks: detecting and tracking targets. It develops feasible inference techniques for sensor networks using statistical graphical model inference, binary sensor detection, event isolation, and dynamic clustering. The main strategy is to use only binary data for rough global inferences and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking in different network topology settings. Finally, the system was tested in both simulation and real-world environments. The simulations were performed on various network topologies, from regularly distributed networks to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall detection system was simulated with real-world settings; it was set up with 30 Bumblebee radars and 30 ultrasonic sensors driven by TI eZ430-RF2500 boards scanning a typical 800 sq ft apartment. The Bumblebee radars are calibrated to detect the falling of a human body, and the two-tier tracking algorithm is used on the ultrasonic sensors to track the location of the elderly occupant.
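The sketch below illustrates the two-tier strategy described above: binary detections give a rough global estimate, and a dynamic cluster around that estimate performs a finer computation. The sensor layout, ranges, and weighting are assumptions, not the thesis algorithms.

```python
import numpy as np

# Tier 1: 1-bit detections and a centroid estimate. Tier 2: a dynamic cluster
# near the rough estimate refines the location with inverse-range weights.
rng = np.random.default_rng(4)
sensors = rng.uniform(0, 100, size=(60, 2))       # randomly deployed sensor nodes
target = np.array([37.0, 62.0])
detect_range, cluster_range = 20.0, 12.0

dists = np.linalg.norm(sensors - target, axis=1)
binary = dists < detect_range                     # tier 1: binary detections only
rough = sensors[binary].mean(axis=0)              # centroid of firing sensors

cluster = np.linalg.norm(sensors - rough, axis=1) < cluster_range
ranges = dists[cluster] + rng.normal(scale=0.5, size=cluster.sum())   # noisy ranges
weights = 1.0 / (ranges + 1e-6)                   # closer sensors dominate
refined = np.average(sensors[cluster], axis=0, weights=weights)

print("rough estimate:", np.round(rough, 1), "refined:", np.round(refined, 1))
```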
Abstract:
File system security is fundamental to the security of UNIX and Linux systems, since in these systems almost everything is in the form of a file. To protect system files and other sensitive user files from unauthorized access, certain security schemes are chosen and used by different organizations in their computer systems. A file system security model provides a formal description of a protection system. Each security model is associated with specified security policies that focus on one or more of the security principles: confidentiality, integrity, and availability. A security policy is not only about “who” can access an object, but also about “how” a subject can access an object. To enforce the security policies, each access request is checked against the specified policies to decide whether it is allowed or rejected. The current protection schemes in UNIX/Linux systems focus on access control. Besides the basic access control scheme of the system itself, which includes permission bits, the setuid and seteuid mechanisms, and the root account, there are other protection models, such as Capabilities, Domain Type Enforcement (DTE), and Role-Based Access Control (RBAC), supported and used in certain organizations. These models protect the confidentiality of the data directly; the integrity of the data is protected indirectly by only allowing trusted users to operate on the objects. The access control decisions of these models depend on either the identity of the user or the attributes of the process the user can execute, and on the attributes of the objects. Adoption of these sophisticated models has been slow; this is likely due to the enormous complexity of specifying controls over a large file system and the need for system administrators to learn a new paradigm for file protection. We propose a new security model: the file system firewall. It adapts the familiar network firewall protection model, used to control the data that flows between networked computers, to file system protection. This model can support access control decisions based on any system-generated attributes of the access requests, e.g., time of day. The access control decisions are not based on a single entity, such as the account in traditional discretionary access control or the domain in DTE. In the file system firewall, access decisions are made upon situations involving multiple entities. A situation is programmable with predicates on the attributes of the subject, the object, and the system, and the file system firewall specifies the appropriate actions for these situations. We implemented a prototype of the file system firewall on SUSE Linux. Preliminary results of performance tests on the prototype indicate that the runtime overhead is acceptable. We compared the file system firewall with Type Enforcement (TE) in SELinux to show that the firewall model can accommodate many other access control models. Finally, we show the ease of use of the firewall model. When the firewall is restricted to a specified part of the system, all other resources are unaffected, which enables a relatively smooth adoption. This, together with the model's familiarity to system administrators, will facilitate adoption and correct use. The user study we conducted on traditional UNIX access control, SELinux, and the file system firewall confirmed this: beginner users found the firewall model easier to use and faster to learn than the traditional UNIX access control scheme and SELinux.
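A conceptual sketch of the “situation” idea: each rule is a predicate over subject, object, and system attributes, and the first matching rule decides the access. The attribute names and rules are illustrative only and do not reflect the syntax of the SUSE Linux prototype.

```python
from datetime import datetime

# Each rule pairs a predicate over (subject, object, system) attributes with an
# action; unmatched requests fall through to a default-deny stance.
RULES = [
    # deny writes to system configuration outside business hours, whoever asks
    (lambda s, o, sys: o["path"].startswith("/etc/") and s["op"] == "write"
        and not (8 <= sys["hour"] < 18), "deny"),
    # allow owners full access to their own files
    (lambda s, o, sys: s["uid"] == o["owner_uid"], "allow"),
]

def decide(subject, obj, system, default="deny"):
    for predicate, action in RULES:
        if predicate(subject, obj, system):
            return action
    return default

request = {"uid": 1000, "op": "write"}
target = {"path": "/etc/ssh/sshd_config", "owner_uid": 0}
state = {"hour": datetime.now().hour}
print(decide(request, target, state))
```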