15 results for two-layer fluid
at Digital Commons at Florida International University
Abstract:
A methodology for formally modeling and analyzing the software architecture of mobile agent systems provides a solid basis for developing high-quality mobile agent systems, and the methodology is also helpful for studying other distributed and concurrent systems. However, providing such a methodology is challenging because of agent mobility in mobile agent systems. The methodology was defined from two essential parts of software architecture: a formalism to define the architectural models and an analysis method to formally verify system properties. The formalism is two-layer Predicate/Transition (PrT) nets extended with dynamic channels, and the analysis method is a hierarchical approach that verifies models at different levels. The two-layer modeling formalism smoothly transforms physical models of mobile agent systems into their architectural models. Dynamic channels facilitate synchronous communication between nets, and they naturally capture the dynamic architectural configuration and agent mobility of mobile agent systems. Component properties are verified on transformed individual components, system properties are checked in a simplified system model, and interaction properties are analyzed on models composed from the involved nets. Based on the formalism and the analysis method, this researcher formally modeled and analyzed a software architecture for mobile agent systems and designed an architectural model of a medical information processing system based on mobile agents. The model checking tool SPIN was used to verify system properties such as reachability, concurrency, and safety of the medical information processing system. From this successful modeling and analysis of the software architecture of mobile agent systems, the conclusion is that PrT nets extended with channels are a powerful tool for modeling mobile agent systems, and that the hierarchical analysis method provides a rigorous foundation for the modeling tool.
The hierarchical analysis method not only reduces the complexity of the analysis, but also expands the application scope of model checking techniques. The results of formally modeling and analyzing the software architecture of the medical information processing system show that model checking is an effective and efficient way to verify software architecture. Moreover, the system demonstrates the high flexibility, efficiency, and low cost of mobile agent technologies.
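Reachability checking of the kind SPIN performed here can be illustrated with a minimal sketch: a breadth-first search over the markings of a small place/transition net. The net, place names, and property below are hypothetical examples chosen for illustration, not the dissertation's actual models:

```python
from collections import deque

# A tiny place/transition net: each transition maps input places
# (tokens consumed) to output places (tokens produced).
transitions = {
    "send":    ({"agent_home": 1}, {"in_transit": 1}),
    "arrive":  ({"in_transit": 1}, {"agent_remote": 1}),
    "process": ({"agent_remote": 1, "data": 1}, {"result": 1, "agent_remote": 1}),
}

def fire(marking, pre, post):
    """Fire a transition if enabled; return the new marking, else None."""
    if any(marking.get(p, 0) < n for p, n in pre.items()):
        return None
    new = dict(marking)
    for p, n in pre.items():
        new[p] -= n
    for p, n in post.items():
        new[p] = new.get(p, 0) + n
    return new

def reachable(initial, goal):
    """Breadth-first search over markings: can a marking covering `goal`
    be reached from `initial`?"""
    seen = set()
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        key = tuple(sorted((p, n) for p, n in m.items() if n))
        if key in seen:
            continue
        seen.add(key)
        if all(m.get(p, 0) >= n for p, n in goal.items()):
            return True
        for pre, post in transitions.values():
            nxt = fire(m, pre, post)
            if nxt is not None:
                queue.append(nxt)
    return False

print(reachable({"agent_home": 1, "data": 1}, {"result": 1}))  # True
```

Model checkers like SPIN explore the same kind of state space, but with far more sophisticated state compression and support for temporal-logic properties.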
A framework for transforming, analyzing, and realizing software designs in Unified Modeling Language
Abstract:
Unified Modeling Language (UML) is the most comprehensive and widely accepted object-oriented modeling language due to its multi-paradigm modeling capabilities and easy-to-use graphical notations, with strong international organizational support and industrial production-quality tool support. However, there is a lack of precise definitions of the semantics of individual UML notations and of the relationships among multiple UML models, which often introduces incompleteness and inconsistency problems into software designs in UML, especially for complex systems. Furthermore, there is a lack of methodologies to ensure a correct implementation from a given UML design. The purpose of this investigation is to verify and validate software designs in UML and to provide dependability assurance for the realization of a UML design. In this research, an approach is proposed to transform UML diagrams into a semantic domain, which is a formal component-based framework. The proposed framework consists of components and interactions through message passing, which are modeled by two-layer algebraic high-level nets and transformation rules, respectively. In the transformation approach, class diagrams, state machine diagrams, and activity diagrams are transformed into component models, and transformation rules are extracted from interaction diagrams. By applying transformation rules to component models, a (sub)system model of one or more scenarios can be constructed. Various techniques, such as model checking and Petri net analysis, can then be adopted to check whether UML designs are complete and consistent. A new component called the property parser was developed and merged into the tool SAM Parser, which realizes (sub)system models automatically. The property parser generates runtime monitoring code and weaves it into system implementations automatically for dependability assurance.
The framework in this investigation is creative and flexible, since it can not only be used to verify and validate UML designs but also provides an approach to build models for various scenarios. As a result of this research, several kinds of previously ignored behavioral inconsistencies can be detected.
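As a rough illustration of the transformation idea, component models with message ports can be composed into a (sub)system model by applying rules extracted from an interaction diagram. Everything below (the Component class, the rule format, and the client/server scenario) is a deliberately simplified hypothetical stand-in, not the framework's actual algebraic high-level nets:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A component model: a named unit with message ports (a simplified
    stand-in for a component modeled as an algebraic high-level net)."""
    name: str
    sends: set = field(default_factory=set)
    receives: set = field(default_factory=set)

def apply_rules(components, rules):
    """Compose a (sub)system model by applying transformation rules:
    each rule (msg, sender, receiver) becomes a connection when the
    sender can emit and the receiver can accept the message."""
    by_name = {c.name: c for c in components}
    connections = []
    for msg, sender, receiver in rules:
        s, r = by_name[sender], by_name[receiver]
        if msg in s.sends and msg in r.receives:
            connections.append((sender, msg, receiver))
    return connections

# Hypothetical scenario extracted from a client/server interaction diagram.
client = Component("Client", sends={"request"}, receives={"reply"})
server = Component("Server", sends={"reply"}, receives={"request"})
rules = [("request", "Client", "Server"), ("reply", "Server", "Client")]
print(apply_rules([client, server], rules))
```

The real framework carries much richer structure (net markings, guards, algebraic terms), but the shape of the composition step (rules wiring component interfaces into a scenario model) is the same.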
Abstract:
The discovery of High-Temperature Superconductors (HTSCs) has spurred the need for the fabrication of superconducting electronic devices able to match the performance of today's semiconductor devices. While there are several HTSCs in use today, YBa2Cu3O7−x (YBCO) is the best characterized and most widely used material for small electronic applications. This thesis explores the fabrication of a two-terminal device with a superconductor and a painted-on electrode as the terminals and a ferroelectric, BaTiO3 (BTO), in between. The methods used to construct such a device and the challenges faced in fabricating a viable device are examined. The ferroelectric layer of the devices that proved adequate for use was poled by the application of an electric field. Temperature Bias Poling used an applied field of 10^5 V/cm at a temperature of approximately 135°C. High Potential Poling used an applied field of 10^6 V/cm at room temperature (20°C). The devices were then tested for a change in their superconducting critical temperature, Tc. A shift of 1-2 K in the Tc(onset) of YBCO was observed for Temperature Bias Poling and a shift of 2-6 K for High Potential Poling. These are the first reported results of the field effect using BTO on YBCO. The mechanism involved in the shifting of Tc is discussed along with possible applications.
Abstract:
We studied the development of leaf characters in two Southeast Asian dipterocarp forest trees under different photosynthetic photon flux densities (PFD) and spectral qualities (red to far-red ratio, R:FR). The two species, Hopea helferi and H. odorata, are taxonomically closely related but differ in their ecological requirements; H. helferi is more drought tolerant and H. odorata more shade tolerant. Seedlings were grown in replicated shadehouse treatments of differing PFD and R:FR. We measured or calculated (1) leaf and tissue thicknesses; (2) mesophyll parenchyma, air space, and lignified tissue volumes; (3) mesophyll air volumes (Vmes/Asurf) and surfaces (Ames/Asurf); (4) palisade cell length and width; (5) chlorophyll per cm² and the chlorophyll a/b ratio; (6) leaf absorption; and (7) attenuance/absorbance at 652 and 550 nm. These characters varied in response to light conditions in both taxa. Characters were predominantly affected by PFD, while R:FR slightly influenced many characters. Leaf characters of H. odorata were more plastic in response to treatment conditions. Characters were correlated with each other in a complex fashion. Variation in leaf anatomy is most likely a consequence of increasing leaf thickness in both taxa, which may increase mechanical strength and defense against herbivory in more exposed environments. Variation in leaf optical properties was most likely caused by pigment photo-bleaching in treatments of more intense PFD and was not correlated with Amax. The greater plasticity of leaf responses in H. odorata helps explain its acclimation over the range of light conditions encountered by this shade-tolerant taxon. The dense layer of scales on the leaf undersurface and other anatomical characters in H. helferi reduced gas exchange and growth in this drought-tolerant tree.
Abstract:
Series Micro-Electro-Mechanical System (MEMS) switches based on a superconductor are utilized to switch between two bandpass hairpin filters with bandwidths of 365 MHz and nominal center frequencies of 2.1 GHz and 2.6 GHz. This was accomplished with four switches actuated in pairs, one pair at a time. When one pair was actuated, the first bandpass filter was coupled to the input and output ports; when the other pair was actuated, the second bandpass filter was coupled to the input and output ports. The device is made of a YBa2Cu3O7 thin film deposited on a 20 mm x 20 mm LaAlO3 substrate by pulsed laser deposition. BaTiO3 deposited by RF magnetron sputtering is utilized as the insulation layer at the switching points of contact. The results showed good performance: a switchable device at 68 V and a temperature of 40 K for the 2.1 GHz filter, and at 75 V and a temperature of 30 K for the 2.6 GHz hairpin filter.
Abstract:
In recent years, the internet has grown exponentially and become more complex. This increased complexity potentially introduces more network-level instability, yet for any end-to-end internet connection, maintaining throughput and reliability at a certain level is very important, because both directly affect the connection's normal operation. A challenging research task, therefore, is to improve a connection's performance by optimizing its throughput and reliability. This dissertation proposed an efficient and reliable transport-layer protocol, called concurrent TCP (cTCP), an extension of the current TCP protocol, to optimize end-to-end connection throughput and enhance end-to-end fault tolerance. The proposed cTCP protocol aggregates multiple paths' bandwidth by supporting concurrent data transfer (CDT) on a single connection, where concurrent data transfer is defined as the concurrent transfer of data from local hosts to foreign hosts via two or more end-to-end paths. An RTT-based CDT mechanism, which uses a path's RTT (round-trip time) to optimize CDT performance, was developed for the proposed cTCP protocol. This mechanism primarily included an RTT-based load distribution and path management scheme, used to optimize connection throughput and reliability; a congestion control and retransmission policy based on RTT was also provided. According to the experimental results, the RTT-based CDT mechanism achieved good CDT performance under different network conditions. Finally, a CWND-based CDT mechanism, which uses a path's CWND (congestion window) to optimize CDT performance, was introduced.
This mechanism primarily included: a CWND-based load allocation scheme, which assigned data to paths in proportion to their CWND to achieve bandwidth aggregation; a CWND-based path management scheme, used to optimize the connection's fault tolerance; and a congestion control and retransmission management policy, similar to regular TCP in its per-path handling. According to the corresponding experimental results, this mechanism achieved near-optimal CDT performance under different network conditions.
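The RTT-based load distribution described above can be sketched in a few lines: paths receive shares of the outgoing segments in inverse proportion to their RTT, so a faster path carries proportionally more data. The weighting rule and the path values below are illustrative assumptions, not the dissertation's actual scheme or measurements:

```python
def rtt_weights(rtts):
    """Weight paths in inverse proportion to RTT: a path with half the
    RTT gets twice the share of segments."""
    inv = [1.0 / r for r in rtts]
    total = sum(inv)
    return [w / total for w in inv]

def allocate_segments(n_segments, rtts):
    """Assign each path a share of n_segments proportional to its weight."""
    weights = rtt_weights(rtts)
    shares = [int(round(n_segments * w)) for w in weights]
    # Fix rounding drift so the shares sum exactly to n_segments.
    shares[0] += n_segments - sum(shares)
    return shares

# Two paths with 20 ms and 60 ms RTT -> roughly a 3:1 split of 100 segments.
print(allocate_segments(100, [0.020, 0.060]))  # [75, 25]
```

A CWND-based allocator has the same shape, with the congestion window replacing 1/RTT as the per-path weight.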
Abstract:
The main objective of this work is to develop a quasi-three-dimensional numerical model to simulate stony debris flows, considering a continuum fluid phase, composed of water and fine sediments, and a non-continuum phase including large particles such as pebbles and boulders. Large particles are treated in a Lagrangian frame of reference using the Discrete Element Method, while the fluid phase is treated in an Eulerian frame, using the Finite Element Method to solve the depth-averaged Navier-Stokes equations in two horizontal dimensions. The particles' equations of motion are solved in three dimensions. The model simulates particle-particle and wall-particle collisions, taking into account that the particles are immersed in a fluid. Bingham and Cross rheological models are used for the continuum phase. Both formulations provide very stable results, even in the range of very low shear rates, and the Bingham formulation is better able to simulate the stopping stage of the fluid when the applied shear stresses are low. Results of numerical simulations have been compared with data from laboratory experiments on a flume-fan prototype. The results show that the model is capable of simulating the motion of large particles moving in the fluid flow, handling dense particulate flows and avoiding overlap among particles. An application to debris flow events that occurred in Northern Venezuela in 1999 shows that the model could replicate the main boulder accumulation areas surveyed by the USGS. The uniqueness of this research is the integration of mud flow and stony debris movement in a single modeling tool that can be used for planning and management of debris-flow-prone areas.
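The two rheological models named above can be written down directly. A minimal sketch follows, assuming a Papanastasiou-style exponential regularization for the Bingham yield stress so that the stress stays smooth near zero shear rate; the dissertation's exact formulation and parameter values may differ:

```python
import math

def bingham_stress(gamma_dot, tau_y, mu_p, m_reg=1000.0):
    """Regularized Bingham model: shear stress = plastic-viscosity term
    plus a yield stress that is switched on smoothly as the shear rate
    grows (m_reg controls how sharply). At gamma_dot = 0 the stress is 0,
    which keeps low-shear-rate computations stable."""
    return mu_p * gamma_dot + tau_y * (1.0 - math.exp(-m_reg * gamma_dot))

def cross_viscosity(gamma_dot, mu0, mu_inf, k, m):
    """Cross model: apparent viscosity falls from mu0 at rest toward
    mu_inf at high shear rates."""
    return mu_inf + (mu0 - mu_inf) / (1.0 + (k * gamma_dot) ** m)
```

Both functions take the shear rate `gamma_dot` in 1/s; `tau_y` is the yield stress, `mu_p` the plastic viscosity, and `mu0`/`mu_inf` the zero- and infinite-shear viscosities of the Cross fit.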
Abstract:
A novel modeling approach is applied to karst hydrology. Long-standing problems in karst hydrology and solute transport are addressed using Lattice Boltzmann methods (LBMs), which contrast with other modeling approaches that have been applied to karst hydrology. The motivation of this dissertation is to develop new computational models for solving ground water hydraulics and transport problems in karst aquifers, which are widespread around the globe. This research tests the viability of the LBM as a robust alternative numerical technique for solving large-scale hydrological problems. The LB models applied in this research are briefly reviewed, and implementation issues are discussed. The dissertation focuses on testing the LB models. The LBM is tested for two different types of inlet boundary conditions for solute transport in finite and effectively semi-infinite domains, and the LBM solutions are verified against analytical solutions. Zero-diffusion transport and Taylor dispersion in slits are also simulated and compared against analytical solutions. These results demonstrate the LBM's flexibility as a solute transport solver. The LBM is then applied to simulate solute transport and fluid flow in porous media traversed by larger conduits. An LBM-based macroscopic flow solver (based on Darcy's law) is linked with an anisotropic dispersion solver. Spatial breakthrough curves in one and two dimensions are fitted against the available analytical solutions. This provides a steady flow model with capabilities routinely found in ground water flow and transport models (e.g., the combination of MODFLOW and MT3D). However, the new LBM-based model retains the ability to solve the inertial flows that are characteristic of karst aquifer conduits. Transient flows in a confined aquifer are solved using two different LBM approaches.
The analogy between Fick's second law (the diffusion equation) and the transient ground water flow equation is used to solve for the transient head distribution. An altered-velocity flow solver with a source/sink term is applied to simulate a drawdown curve. Hydraulic parameters such as transmissivity and the storage coefficient are linked with LB parameters. These capabilities complete the LBM's effective treatment of the types of processes that are simulated by standard ground water models. The LB model is verified against field data for drawdown in a confined aquifer.
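The diffusion-equation analogy above can be illustrated with a minimal D1Q2 lattice Boltzmann solver for 1-D diffusion. This is a generic textbook sketch in lattice units (dx = dt = 1), not the dissertation's conduit or transport model:

```python
import numpy as np

def lbm_diffusion_1d(c0, tau, steps):
    """Minimal D1Q2 lattice Boltzmann solver for 1-D diffusion.
    Two populations stream right and left; BGK collision relaxes each
    toward the equilibrium C/2. The resulting diffusivity in lattice
    units is D = tau - 0.5. Boundaries are periodic."""
    f_r = 0.5 * np.asarray(c0, dtype=float)  # right-moving population
    f_l = f_r.copy()                         # left-moving population
    for _ in range(steps):
        c = f_r + f_l
        # BGK collision toward equilibrium (mass is conserved exactly).
        f_r += (0.5 * c - f_r) / tau
        f_l += (0.5 * c - f_l) / tau
        # Streaming step with periodic boundaries.
        f_r = np.roll(f_r, 1)
        f_l = np.roll(f_l, -1)
    return f_r + f_l

# A concentration pulse spreads out while total mass stays constant.
c0 = np.zeros(64)
c0[32] = 1.0
c = lbm_diffusion_1d(c0, tau=1.0, steps=200)
```

Replacing the concentration with hydraulic head and mapping `tau` onto transmissivity over storativity gives the transient ground water analogy used in the dissertation.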
Abstract:
This research sought to understand the role that differentially assessed lands (lands in the United States given tax breaks in return for a guarantee that they remain in agriculture) play in influencing urban growth. Our method was to calibrate the SLEUTH urban growth model under two different conditions. The first used an excluded layer that ignored such lands, effectively rendering them available for development; the second treated those lands as totally excluded from development. Our hypothesis was that excluding those lands would yield better metrics of fit with past data. Our results validate this hypothesis, since two different metrics that evaluate goodness of fit both yielded higher values when differentially assessed lands were treated as excluded. This suggests that, at least in our study area, differential assessment, which protects farm and ranch lands for tenuous periods of time, has indeed allowed farmland to resist urban development. Including differentially assessed lands also yielded very different calibrated coefficients of growth, as the model tried to account for the same growth patterns over two very different excluded areas. Excluded layer design can therefore greatly affect model behavior. Since differentially assessed lands are quite common throughout the United States and are often ignored in urban growth modeling, the findings of this research can assist other urban growth modelers in designing excluded layers that result in more accurate model calibration and thus forecasting.
Abstract:
The goal of this investigation was to examine how sediment accretion and organic carbon (OC) burial rates in mangrove forests respond to climate change. Specifically, will the accretion rates keep pace with sea-level rise, and what is the source and fate of OC in the system? Mass accumulation, accretion, and OC burial rates were determined via ²¹⁰Pb dating (i.e., a 100-year time scale) on sediment cores collected from two mangrove forest sites within Everglades National Park, Florida (USA). Enhanced mass accumulation, accretion, and OC burial rates were found in an upper layer that corresponded to a well-documented storm surge deposit. Accretion rates were 5.9 and 6.5 mm yr⁻¹ within the storm deposit, compared to overall rates of 2.5 and 3.6 mm yr⁻¹. These rates match or exceed the average sea-level rise reported for Key West, Florida. Organic carbon burial rates were 260 and 393 g m⁻² yr⁻¹ within the storm deposit, compared to overall burial rates of 151 and 168 g m⁻² yr⁻¹. The overall rates are similar to global estimates for OC burial in marine wetlands. With tropical storms being a frequent occurrence in this region, the resulting storm surge deposits are an important mechanism for maintaining both overall accretion and OC burial rates. Enhanced OC burial rates within the storm deposit could be due to an increase in productivity created by higher concentrations of phosphorus within storm-delivered sediments and/or to the deposition of allochthonous OC. Climate change-amplified storms and sea-level rise could damage mangrove forests, exposing previously buried OC to oxidation and contributing to increasing atmospheric CO2 concentrations. However, the processes described here provide a mechanism whereby oxidation of OC would be limited and the overall OC reservoir maintained within the mangrove forest sediments.
Abstract:
Compact thermal-fluid systems are found in many industries, from aerospace to microelectronics, where a combination of small size, light weight, and high surface-area-to-volume-ratio fluid networks is necessary. These devices are typically designed with fluid networks consisting of many small parallel channels that effectively pack a large amount of heat transfer surface area into a very small volume, but do so at the cost of increased pumping power requirements. To offset this cost, the use of a branching fluid network for the distribution of coolant within a heat sink is investigated. The goal of the branch design technique is to minimize the entropy generation associated with the combination of viscous dissipation and convection heat transfer experienced by the coolant in the heat sink, while maintaining compact, high heat transfer surface-area-to-volume ratios. The derivation of Murray's Law, originally developed to predict the geometry of physiological transport systems, is extended to heat sink designs that minimize entropy generation. Two heat sink designs at different scales were built and tested experimentally and analytically. The first uses this new derivation of Murray's Law; the second uses a combination of Murray's Law and Constructal Theory. The results of the experiments were used to verify the analytical and numerical models, which were then used to compare the performance of the heat sink with other compact high-performance heat sink designs. The results showed that the techniques used to design branching fluid networks significantly improve the performance of active heat sinks. The design experience gained was then used to develop a set of geometric relations that optimize the heat-transfer-to-pumping-power ratio of a single cooling channel element. Each element can be connected to others using a set of derived geometric guidelines governing branch diameters and angles. The methodology can be used to design branching fluid networks that fit any geometry.
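In its classical form, Murray's Law states that at a bifurcation the cube of the parent vessel's radius equals the sum of the cubes of the daughter radii. The snippet below applies that cube law to a symmetric split; it illustrates only the generic law, not the dissertation's entropy-generation extension:

```python
def murray_daughter_radius(r_parent, n_children=2):
    """Classical Murray's Law: r_parent**3 == sum(r_child**3).
    For n equal daughters, each daughter radius is r_parent / n**(1/3)."""
    return r_parent / n_children ** (1.0 / 3.0)

# Each daughter of a 2 mm parent branch in a symmetric bifurcation:
r = murray_daughter_radius(2.0)
# The cube law holds: 2*r**3 recovers the parent's r**3.
assert abs(2.0 ** 3 - 2 * r ** 3) < 1e-9
```

Under the cube law the wall shear stress is the same in every branch, which is why the relation also appears in minimum-work and minimum-entropy-generation arguments for transport networks.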
Abstract:
We have modified a technique that uses a single pair of primer sets directed against homologous but distinct genes on the X and Y chromosomes, all of which are coamplified in the same reaction tube with trace amounts of radioactivity. The resulting bands are equal in length yet distinguishable by restriction enzyme sites, generating two independent bands: a 364 bp X-specific band and a 280 bp Y-specific band. A standard curve was generated to show the linear relationship between the average X/Y ratio and the %Y or %X chromosomal content. Of the 51 purified amniocyte DNA samples analyzed, 16 samples showed evidence of high %X contamination, while 2 samples demonstrated higher %Y than the expected 50% X and 50% Y chromosomal content. With regard to the 25 processed sperm samples analyzed, X-sperm enrichment was evident when compared to the primary sex ratio, whereas Y-sperm enrichment was evident when we compared samples before and after selection.
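The standard-curve step above amounts to an ordinary least-squares line relating the measured X/Y band ratio to known chromosomal content, then inverting it for unknown samples. The calibration numbers below are hypothetical, chosen only to show the computation:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = slope*x + intercept (the
    standard-curve step: measured band ratio vs. known % content)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration points: measured X/Y ratio vs. known %X.
ratios    = [0.5, 1.0, 1.5, 2.0]
percent_x = [40.0, 50.0, 60.0, 70.0]
slope, intercept = fit_line(ratios, percent_x)

# Estimate %X for a new sample with a measured ratio of 1.25.
estimate = slope * 1.25 + intercept
```

With these made-up points the fit is exact (slope 20, intercept 30), so the sample estimate is 55 %X; real calibration data would carry scatter and a goodness-of-fit check.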