3 results for agent based modeling
at Duke University
Abstract:
<p>With the increasing prevalence and capabilities of autonomous systems as part of complex heterogeneous manned-unmanned environments (HMUEs), an important consideration is the impact of the introduction of automation on the optimal assignment of human personnel. The US Navy implemented optimal staffing techniques in the 1990s and 2000s with a "minimal staffing" approach. The results were poor, leading to the degradation of Naval preparedness. Clearly, another approach to determining optimal staffing is necessary. To this end, the goal of this research is to develop human performance models for use in determining the optimal manning of HMUEs. The human performance models are developed using an agent-based simulation of the aircraft carrier flight deck, a representative safety-critical HMUE. The Personnel Multi-Agent Safety and Control Simulation (PMASCS) simulates and analyzes the effects of introducing generalized maintenance crew skill sets and accelerated failure repair times on the overall performance and safety of the carrier flight deck. A behavioral model of five operator types (ordnance officers, chocks and chains, fueling officers, plane captains, and maintenance operators) is presented here, along with an aircraft failure model. The main focus of this work is on the maintenance operators and aircraft failure modeling, since they have a direct impact on total launch time, a primary metric of carrier deck performance. With PMASCS I explore the effects of two variables on the total launch time of 22 aircraft: 1) the skill level of maintenance operators and 2) aircraft failure repair times while on the catapult (referred to as Phase 4 repair times). It is found that neither introducing a generic skill set to maintenance crews nor introducing a technology to accelerate Phase 4 aircraft repair times improves the average total launch time of 22 aircraft.
An optimal manning level of 3 maintenance crews is found under all conditions, the point beyond which additional maintenance crews do not reduce the total launch time. An additional discussion is included of how these results change if operations are relieved of the bottleneck of installing the holdback bar at launch time.</p>
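The diminishing-returns finding above can be illustrated with a toy discrete-event sketch. This is not PMASCS itself: the failure probability, repair durations, and launch cycle below are all hypothetical. Failed aircraft are repaired in parallel by a pool of crews, then a single catapult launches aircraft in order of readiness, so the crew pool stops being the bottleneck once repairs finish faster than the catapult can cycle.

```python
import random

def total_launch_time(n_crews, n_aircraft=22, seed=0):
    """Toy sketch (not PMASCS): failed aircraft are repaired in parallel
    by n_crews crews, then all aircraft launch one at a time from a
    single catapult. All rates and probabilities are illustrative."""
    rng = random.Random(seed)
    crew_free = [0.0] * n_crews          # when each crew next becomes idle
    ready = []                           # time each aircraft is fit to launch
    for _ in range(n_aircraft):
        if rng.random() < 0.3:           # hypothetical failure probability
            crew_free.sort()
            done = crew_free[0] + rng.uniform(2.0, 6.0)  # hypothetical repair
            crew_free[0] = done
            ready.append(done)
        else:
            ready.append(0.0)
    # Single catapult: one launch per time unit, in order of readiness.
    t = 0.0
    for r in sorted(ready):
        t = max(t, r) + 1.0
    return t

for crews in (1, 2, 3, 4, 5):
    avg = sum(total_launch_time(crews, seed=s) for s in range(200)) / 200
    print(crews, round(avg, 1))
```

In this sketch the average total launch time drops sharply from one crew to a few crews and then plateaus, echoing the qualitative shape of the dissertation's result (an optimum beyond which extra crews buy nothing), though the specific numbers are meaningless.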
Abstract:
<p>Terrestrial ecosystems, occupying more than 25% of the Earth's surface, can serve as 'biological valves' in regulating the anthropogenic emissions of atmospheric aerosol particles and greenhouse gases (GHGs) as responses to their surrounding environments. While the significance of quantifying the exchange rates of GHGs and atmospheric aerosol particles between the terrestrial biosphere and the atmosphere is hardly questioned in many scientific fields, progress in improving model predictability, data interpretation, or the combination of the two remains impeded by the lack of a precise framework elucidating their dynamic transport processes over a wide range of spatiotemporal scales. The difficulty in developing prognostic modeling tools to quantify the source or sink strength of these atmospheric substances can be further magnified by the fact that the climate system is also sensitive to the feedback from terrestrial ecosystems, forming the so-called 'feedback cycle'. Hence, the emergent need is to reduce uncertainties when assessing this complex and dynamic feedback cycle, which is necessary to support the decisions of mitigation and adaptation policies associated with human activities (e.g., anthropogenic emission controls and land use management) under current and future climate regimes.</p><p>With the goal of improving predictions for the biosphere-atmosphere exchange of biologically active gases and atmospheric aerosol particles, the main focus of this dissertation is on revising and up-scaling the biotic and abiotic transport processes from leaf to canopy scales. The validity of previous modeling studies in determining the exchange rate of gases and particles is evaluated, with detailed descriptions of their limitations. Mechanistic modeling approaches along with empirical studies across different scales are employed to refine the mathematical descriptions of surface conductance responsible for gas and particle exchanges as commonly adopted by all operational models. Specifically, how variation in horizontal leaf area density within the vegetated medium, leaf size, and leaf microroughness impact the aerodynamic attributes, and thereby the ultrafine particle collection efficiency at the leaf/branch scale, is explored using wind tunnel experiments with interpretations by a porous media model and a scaling analysis. A multi-layered and size-resolved second-order closure model, combined with particle flux and concentration measurements within and above a forest, is used to explore the particle transport processes within the canopy sub-layer and the partitioning of particle deposition onto the canopy medium and the forest floor. For gases, a modeling framework accounting for leaf-level boundary layer effects on the stomatal pathway for gas exchange is proposed and combined with sap flux measurements in a wind tunnel to assess how leaf-level transpiration varies with increasing wind speed. How exogenous environmental conditions and endogenous soil-root-stem-leaf hydraulic and eco-physiological properties impact the above- and below-ground water dynamics in the soil-plant system and shape plant responses to droughts is assessed by a porous media model that accommodates the transient water flow within the plant vascular system and is coupled with the aforementioned leaf-level gas exchange model and a soil-root interaction model.</p><p>It should be noted that tackling all aspects of potential issues causing uncertainties in forecasting the feedback cycle between terrestrial ecosystems and the climate is unrealistic in a single dissertation, but further research questions and opportunities building on the foundation derived from this dissertation are also briefly discussed.</p>
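The leaf-level boundary layer effect on the stomatal pathway mentioned above can be sketched with the standard series-resistance formulation used in big-leaf gas-exchange models: the stomatal and boundary-layer conductances combine like resistors in series, so a thinner boundary layer (higher wind speed) pushes the total conductance toward the stomatal limit. The conductance values below are illustrative, not measurements from the dissertation.

```python
def series_conductance(g_s, g_b):
    """Total leaf conductance (mol m^-2 s^-1) for stomatal (g_s) and
    boundary-layer (g_b) pathways acting in series, as in standard
    big-leaf gas-exchange formulations."""
    return 1.0 / (1.0 / g_s + 1.0 / g_b)

# As wind speed rises, g_b grows (thinner boundary layer), so the total
# conductance saturates toward the stomatal limit g_s.
g_s = 0.2                              # hypothetical stomatal conductance
for g_b in (0.5, 1.0, 2.0, 8.0):       # boundary layer thinning with wind
    print(round(series_conductance(g_s, g_b), 3))
# → 0.143, 0.167, 0.182, 0.195
```

The saturation behavior is why boundary-layer effects matter most for large leaves at low wind speed, where g_b is small enough to co-limit transpiration.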
Abstract:
<p>While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks, and gene regulatory networks, there are few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications are restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents, and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications remains unknown.</p><p>In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons, which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist, with vast potential applications.
As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples, such as 1) fluorescent taggants and 2) stochastic computing.</p><p>By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime-coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.</p><p>In addition, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms with wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware traditional computers use, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor / GPU as specialized functional units or organized as a discrete accelerator to bring substantial speedups and power savings.</p>
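The phase-type sampling idea above can be sketched in software: simulate a small CTMC in which each transient state (a chromophore holding the exciton) competes between hopping to a neighbor and an absorbing emission transition, and record the absorption time, which is then a draw from the corresponding phase-type distribution. The two-state rate matrix below is a hypothetical toy, not a fitted RET network.

```python
import random

def sample_phase_type(rates, absorb, start=0, rng=random):
    """Draw one sample from the phase-type distribution of a small CTMC
    by direct (Gillespie-style) simulation. rates[i][j] is the hop rate
    from transient state i to j; absorb[i] is the absorption rate from
    state i (emission, in the RET analogy). Toy sketch, illustrative only."""
    t, state = 0.0, start
    while True:
        out = [(j, r) for j, r in enumerate(rates[state]) if r > 0]
        total = sum(r for _, r in out) + absorb[state]
        t += rng.expovariate(total)          # holding time in current state
        if rng.random() < absorb[state] / total:
            return t                          # absorbed: emission at time t
        x = rng.uniform(0.0, total - absorb[state])
        for j, r in out:                      # pick next state ∝ its rate
            x -= r
            if x <= 0:
                state = j
                break
        else:
            state = out[-1][0]                # float-rounding fallback

rng = random.Random(1)
rates = [[0.0, 2.0], [1.0, 0.0]]     # hypothetical chromophore hop rates
absorb = [0.5, 3.0]                  # hypothetical emission rates
samples = [sample_phase_type(rates, absorb, rng=rng) for _ in range(5000)]
print(round(sum(samples) / len(samples), 2))
```

For these toy rates the analytic mean absorption time from state 0 is 0.75 (solving the linear system for expected hitting times of the absorbing state), so the empirical average should land near that value; in hardware, the RET network produces such samples physically as photon arrival times rather than by simulation.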