32 results for Empirical Flow Models
Abstract:
This dissertation studies the survival of newly founded U.S. firms using three different releases of the Kauffman Firm Survey. Each chapter studies firms' survival from a different perspective.

The first essay studies firms' survival through an analysis of their initial state at startup and their current state as they gain maturity. The probability of survival is estimated using three probit models that combine firm-specific variables with an industry scale variable to control for the operating environment. The firm-specific variables include size, experience, and leverage measured as a debt-to-value ratio. The results indicate that size and relevant experience are positive predictors for both the initial and current states. Debt appears to predict exit unless it is justified by the acquisition of assets. As suggested previously in the literature, entering a smaller-scale industry is a positive predictor of survival from birth. Finally, a smaller-scale industry diminishes the negative effects of debt.

The second essay uses a hazard model to confirm that new service-providing (SP) firms are more likely to survive than new product providers (PPs). I investigate possible explanations for the higher survival rate of SPs using a Cox proportional hazard model. I examine six hypotheses (variations in capital per worker, expenses per worker, owners' experience, industry wages, assets, and size), none of which appears to explain why SPs are more likely than PPs to survive. Two other possibilities, tax evasion and human/social relations, are discussed, but these could not be tested due to lack of data.

The third essay investigates women-owned firms' higher failure rates using Cox proportional hazard estimates on two models. I make use of a previously unused variable that proxies for owners' confidence: the owners' self-evaluated competitive advantage.

The first empirical model allows me to compare women's and men's hazard rates for each variable. In the second model I successively add the variables that could potentially explain why women have a higher failure rate. Unfortunately, I am not able to fully explain the gender effect on firms' survival. Nonetheless, the second empirical approach confirms that social and psychological differences between genders are important in explaining the higher likelihood of failure among women-owned firms.
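As a rough illustration of the first essay's approach, the sketch below fits a probit model of survival on firm-level covariates with statsmodels. The variable names (size, experience, leverage, industry_scale, survived) and the synthetic data are placeholders, not the Kauffman Firm Survey fields.

```python
# Illustrative sketch only: a probit model of firm survival of the kind described
# above, assuming a pandas DataFrame `firms` with hypothetical columns
# 'survived', 'size', 'experience', 'leverage', and 'industry_scale'.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_survival_probit(firms: pd.DataFrame):
    X = sm.add_constant(firms[["size", "experience", "leverage", "industry_scale"]])
    y = firms["survived"]          # 1 = firm still operating, 0 = exited
    return sm.Probit(y, X).fit(disp=False)

# Example with synthetic data, just to show the call pattern.
rng = np.random.default_rng(0)
firms = pd.DataFrame({
    "size": rng.lognormal(1.0, 0.5, 500),
    "experience": rng.integers(0, 25, 500),
    "leverage": rng.uniform(0, 1, 500),        # debt-to-value ratio
    "industry_scale": rng.uniform(0, 1, 500),
})
latent = 0.3 * np.log(firms["size"]) + 0.05 * firms["experience"] - 0.8 * firms["leverage"]
firms["survived"] = (latent + rng.normal(0, 1, 500) > 0).astype(int)
print(fit_survival_probit(firms).summary())
```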
Abstract:
Taylor Slough is one of the natural freshwater contributors to Florida Bay through a network of microtidal creeks crossing the Everglades Mangrove Ecotone Region (EMER). The ecological function of the EMER is critical, since it mediates freshwater and nutrient inputs and controls the water quality in Eastern Florida Bay. Furthermore, this region is vulnerable to changing hydrodynamics and nutrient loadings as a result of upstream freshwater management practices proposed by the Comprehensive Everglades Restoration Program (CERP), currently the largest wetland restoration project in the USA. Despite the hydrological importance of Taylor Slough in the water budget of Florida Bay, there are no fine-scale (∼1 km²) hydrodynamic models of this system that can be utilized as a tool to evaluate potential changes in water flow, salinity, and water quality. Taylor River is one of the major creeks draining Taylor Slough freshwater into Florida Bay. We performed a water budget analysis for the Taylor River area, based on long-term hydrologic data (1999–2007) and supplemented by hydrodynamic modeling using a MIKE FLOOD (DHI, http://dhigroup.com/) model to evaluate groundwater and overland water discharges. The seasonal hydrologic characteristics are very distinctive (the ratio of average Taylor River wet- to dry-season outflow was 6 to 1 during 1999–2006), with pronounced interannual variability of flow. The water budget shows a net dominance of through-flow in the tidal mixing zone, while local precipitation and evapotranspiration play only a secondary role, at least in the wet season. During the dry season, the tidal flood reaches the upstream boundary of the study area during approximately 80 days per year on average. The groundwater field measurements indicate a mostly upward-oriented leakage, which possibly equals the evapotranspiration term. The model results suggest that groundwater contributes substantially to water salinity in the EMER. The model performance is satisfactory during the dry season, when surface flow in the area is confined to the Taylor River channel. The model also provided guidance on the importance of capturing the overland flow component, which enters the area as sheet flow during the rainy season. Overall, the modeling approach is suitable for reaching a better understanding of the water budget in the mangrove region. However, more detailed field data are needed to confirm model predictions by further calibrating overland flow parameters.
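For orientation, the following minimal sketch shows the kind of budget closure being described: storage change balanced against precipitation, evapotranspiration, surface through-flow, and groundwater leakage. The term names and the wet-season numbers are illustrative only, not values from the Taylor River analysis.

```python
# A minimal sketch of a water budget closure, assuming all terms are expressed in
# the same units (e.g., mm per season). Term names are illustrative.
def water_budget_residual(precip, evapotranspiration, surface_inflow,
                          surface_outflow, groundwater_leakage, storage_change):
    """Return the residual dS - (P - ET + Qin - Qout + GW);
    a value near zero indicates the budget closes."""
    return storage_change - (precip - evapotranspiration
                             + surface_inflow - surface_outflow
                             + groundwater_leakage)

# Wet-season example with made-up numbers (mm): a nonzero residual would point to
# an unmeasured term such as upward groundwater leakage or overland sheet flow.
print(water_budget_residual(precip=900, evapotranspiration=450, surface_inflow=700,
                            surface_outflow=1100, groundwater_leakage=30,
                            storage_change=80))
```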
Abstract:
In 2010, the American Association of State Highway and Transportation Officials (AASHTO) released a safety analysis software system known as SafetyAnalyst. SafetyAnalyst implements the empirical Bayes (EB) method, which requires the use of Safety Performance Functions (SPFs). The system is equipped with a set of national default SPFs, and the software calibrates the default SPFs to represent the agency's safety performance. However, it is recommended that agencies generate agency-specific SPFs whenever possible. Many investigators support the view that agency-specific SPFs represent the agency data better than the national default SPFs calibrated to agency data. Furthermore, it is believed that the crash trends in Florida are different from those of the states whose data were used to develop the national default SPFs. In this dissertation, Florida-specific SPFs were developed using the 2008 Roadway Characteristics Inventory (RCI) data and crash and traffic data from 2007–2010 for both total and fatal-and-injury (FI) crashes. The data were randomly divided into two sets, one for calibration (70% of the data) and another for validation (30% of the data). The negative binomial (NB) model was used to develop the Florida-specific SPFs for each of the subtypes of roadway segments, intersections, and ramps, using the calibration data. Statistical goodness-of-fit tests were performed on the calibrated models, which were then validated using the validation data set. The results were compared in order to assess the transferability of the Florida-specific SPF models. The default SafetyAnalyst SPFs were calibrated to Florida data by adjusting the national default SPFs with local calibration factors. The performance of the Florida-specific SPFs and the SafetyAnalyst default SPFs calibrated to Florida data was then compared using a number of methods, including visual plots and statistical goodness-of-fit tests. The plots of SPFs against the observed crash data were used to compare the prediction performance of the two models. Three goodness-of-fit measures, the mean absolute deviance (MAD), the mean square prediction error (MSPE), and the Freeman-Tukey R² (R²FT), were also used for comparison in order to identify the better-fitting model. The results showed that the Florida-specific SPFs yielded better prediction performance than the national default SPFs calibrated to Florida data. The performance of the Florida-specific SPFs was further compared with that of full SPFs, which include both traffic and geometric variables, in two major applications of SPFs, i.e., crash prediction and identification of high crash locations. The results showed that both SPF models yielded very similar performance in both applications. These empirical results support the use of the flow-only SPF models adopted in SafetyAnalyst, which require much less effort to develop compared to full SPFs.
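To make the flow-only SPF form concrete, the sketch below fits a negative binomial SPF of the general shape crashes = exp(b0) * AADT^b1 and scores it with MAD and MSPE. The data are synthetic and the dispersion parameter is a placeholder; this is not the dissertation's calibration code.

```python
# Hedged sketch of a flow-only Safety Performance Function fitted with a negative
# binomial GLM and scored with MAD and MSPE. Column names and data are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
segments = pd.DataFrame({"aadt": rng.uniform(2_000, 60_000, 400)})
segments["crashes"] = rng.poisson(np.exp(-6.0 + 0.8 * np.log(segments["aadt"])))

X = sm.add_constant(np.log(segments[["aadt"]]))          # log-linear flow-only form
nb = sm.GLM(segments["crashes"], X,
            family=sm.families.NegativeBinomial(alpha=1.0)).fit()

pred = nb.predict(X)
mad = np.mean(np.abs(segments["crashes"] - pred))        # mean absolute deviance
mspe = np.mean((segments["crashes"] - pred) ** 2)        # mean square prediction error
print(nb.params, mad, mspe)
```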
Abstract:
This research study was designed to examine the relationship between globalization as measured by the KOF index, its related forces (economic, political, cultural, and technological), and the public provision of higher education. This study is important since globalization is increasingly being associated with changes in critical aspects of higher education. The public provision of education was measured by government expenditure and educational outcomes, that is, participation, gender equity, and attainment. The study utilized a non-experimental quantitative research design. Data collected from secondary sources for 139 selected countries were analyzed. The countries were geographically distributed and included both developed and developing countries. The choice of countries for inclusion in the study was based on data availability. The data, which were sourced from international organizations such as the United Nations and the World Bank, were examined for different time periods using five-year averages. The period covered was 1970 to 2009.

The relationship between globalization and the higher education variables was examined using cross-sectional regression analysis while controlling for economic, political, and demographic factors. The major findings of the study are as follows. For the two spending models, only one revealed a significant relationship between globalization and education spending, with R² values ranging from .222 to .448 over the period. This relationship was, however, negative, indicating that as globalization increased, spending on higher education declined. For the education outcomes models, the relationship was not significant. Among the sub-indices of globalization, only the political dimension showed significance, as seen in the spending model. Political globalization was significant for six periods, with R² values ranging from .31 to .52.

The study concluded that the results are mixed for both the spending and outcome models. It also found no robust effects of globalization on government education provision. This finding is not surprising given the existing literature, which reports mixed results on the social impact of globalization.
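A minimal sketch of this cross-sectional setup appears below: the panel is averaged into one five-year window and higher-education spending is regressed on the KOF index with a control. The column names (kof_index, gdp_per_capita, edu_spending) are hypothetical placeholders for the variables described above.

```python
# Illustrative sketch of one five-year cross-sectional regression; it assumes a
# country-year panel DataFrame with hypothetical columns 'country', 'year',
# 'kof_index', 'gdp_per_capita', and 'edu_spending'.
import pandas as pd
import statsmodels.api as sm

def period_regression(panel: pd.DataFrame, period_start: int):
    """OLS of higher-education spending on globalization for one five-year window."""
    window = panel[(panel["year"] >= period_start) & (panel["year"] < period_start + 5)]
    avg = (window.groupby("country")[["kof_index", "gdp_per_capita", "edu_spending"]]
                 .mean()
                 .dropna())
    X = sm.add_constant(avg[["kof_index", "gdp_per_capita"]])
    return sm.OLS(avg["edu_spending"], X).fit()   # .rsquared gives the period R²
```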
Abstract:
Secrecy is fundamental to computer security, but real systems often cannot avoid leaking some secret information. For this reason, the past decade has seen growing interest in quantitative theories of information flow that allow us to quantify the information being leaked. Within these theories, the system is modeled as an information-theoretic channel that specifies the probability of each output, given each input. Given a prior distribution on those inputs, entropy-like measures quantify the amount of information leakage caused by the channel.

This thesis presents new results in the theory of min-entropy leakage. First, we study the perspective of secrecy as a resource that is gradually consumed by a system. We explore this intuition through various models of min-entropy consumption. Next, we consider several composition operators that allow smaller systems to be combined into larger systems, and explore the extent to which the leakage of a combined system is constrained by the leakage of its constituents. Most significantly, we prove upper bounds on the leakage of a cascade of two channels, where the output of the first channel is used as input to the second. In addition, we show how to decompose a channel into a cascade of channels.

We also establish fundamental new results about the recently proposed g-leakage family of measures. These results further highlight the significance of channel cascading. We prove that whenever channel A is composition refined by channel B, that is, whenever A is the cascade of B and R for some channel R, the leakage of A never exceeds that of B, regardless of the prior distribution or leakage measure (Shannon leakage, guessing entropy leakage, min-entropy leakage, or g-leakage). Moreover, we show that composition refinement is a partial order if we quotient away channel structure that is redundant with respect to leakage alone. These results are strengthened by the proof that composition refinement is the only way for one channel to never leak more than another with respect to g-leakage. Therefore, composition refinement robustly answers the question of when a channel is always at least as secure as another from a leakage point of view.
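The core quantities are easy to state concretely. The sketch below, which assumes channels represented as row-stochastic numpy matrices C[x, y] = P(y | x), computes prior and posterior min-entropy vulnerability, min-entropy leakage, and a cascade formed as a matrix product; the prior and channel entries are made up for illustration.

```python
# Sketch of min-entropy leakage and channel cascading for finite channels.
import numpy as np

def vulnerability(prior):
    # V(prior) = probability of guessing the secret in one try
    return np.max(prior)

def posterior_vulnerability(prior, channel):
    # V(prior, C) = sum over outputs y of max over inputs x of prior[x] * C[x, y]
    return np.sum(np.max(prior[:, None] * channel, axis=0))

def min_entropy_leakage(prior, channel):
    return np.log2(posterior_vulnerability(prior, channel) / vulnerability(prior))

def cascade(first, second):
    # Output of `first` feeds `second`; the composite channel is the matrix product.
    return first @ second

prior = np.array([0.5, 0.25, 0.25])
B = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
R = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
A = cascade(B, R)
# Per the cascade bound discussed above: A = BR never leaks more than B.
print(min_entropy_leakage(prior, A), "<=", min_entropy_leakage(prior, B))
```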
Abstract:
Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One new development paradigm that places models, as abstractions, at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component in MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness, targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models to an intermediate artifact in a high-level language (HLL), e.g., Java or C++, and then execute the resulting code.

Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services being provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM that transforms i-DSML models into executable scripts for the next lower layer to process.

The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources.

At the onset of this research, only one i-DSML had been created using the aforementioned approach, for the user-centric communication domain. This i-DSML is the Communication Modeling Language (CML), and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise.

This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK as swappable framework extensions.

This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid (microgrid) energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced developmental effort.
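As a purely conceptual sketch (not the CVM or GMoE code base), the following shows one way a generic synthesis-engine control flow can be kept loosely coupled to swappable domain-specific knowledge, in the spirit of the decoupling described above; all class and method names are invented for illustration.

```python
# Conceptual sketch only: a domain-independent synthesis loop (the generic model of
# execution) delegating all domain decisions to a swappable DSK extension.
from abc import ABC, abstractmethod

class DomainSpecificKnowledge(ABC):
    """Framework extension point: everything the synthesis engine must know
    about one domain (e.g., communication or microgrid energy management)."""
    @abstractmethod
    def decompose(self, model_change: dict) -> list:
        """Break a runtime model change into domain-level steps."""
    @abstractmethod
    def to_script(self, step: dict) -> str:
        """Translate one step into a script for the next DSVM layer."""

class SynthesisEngine:
    """Generic model of execution: the control flow is domain-independent."""
    def __init__(self, dsk: DomainSpecificKnowledge):
        self.dsk = dsk
    def synthesize(self, model_change: dict) -> list:
        return [self.dsk.to_script(step) for step in self.dsk.decompose(model_change)]

class MicrogridDSK(DomainSpecificKnowledge):
    def decompose(self, model_change):
        return [{"op": op} for op in model_change.get("operations", [])]
    def to_script(self, step):
        return f"EXEC {step['op']}"

engine = SynthesisEngine(MicrogridDSK())
print(engine.synthesize({"operations": ["shed_load", "dispatch_storage"]}))
```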
Abstract:
This dissertation addresses how the cultural dimensions of individualism and collectivism affect the attributions people make for unethical behavior at work. The moderating effect of ethnicity is also examined by considering two culturally diverse groups: Hispanics and Anglos. The sample for this study is a group of business graduate students from two universities in the Southeast. A 20-minute survey was distributed to master's degree students in their classrooms and later returned to the researcher. Individualism and collectivism were operationalized by a set of attitude items, while unethical work behavior was introduced in the form of hypothetical descriptions, or scenarios. Data analysis employed multiple-group confirmatory factor analysis for both independent and dependent variables, followed by multiple-group LISREL models, in order to test the predictions. Results confirmed the expected link between the cultural variables and attribution responses, although the role of the independent variables shifted due to the moderating effect of ethnicity and the nuances of each particular situation.
Abstract:
Variable Speed Limit (VSL) strategies identify and disseminate dynamic speed limits that are determined to be appropriate based on prevailing traffic, road surface, and weather conditions. This dissertation develops and evaluates a shockwave-based VSL system that uses a heuristic switching-logic controller with specified thresholds of prevailing traffic flow conditions. The system aims to improve operations and mobility at critical bottlenecks. Before traffic breakdown occurs, the proposed VSL system aims to prevent or postpone breakdown by decreasing the inflow and achieving a uniform distribution of speed and flow. After breakdown occurs, the VSL system aims to dampen traffic congestion by reducing the inflow to the congested area and increasing the bottleneck capacity by deactivating the VSL at the head of the congested area. The shockwave-based VSL system pushes the VSL location upstream as the congested area propagates upstream. In addition to testing the system using infrastructure detector-based data, this dissertation investigates the use of Connected Vehicle trajectory data as input to the shockwave-based VSL system. Since field Connected Vehicle data are not available, Vehicle-to-Infrastructure communication is modeled in the microscopic simulation as part of this research to obtain individual vehicle trajectories. In this system, a wavelet transform is used to analyze aggregated individual vehicles' speed data to determine the locations of congestion. The currently recommended calibration procedures for simulation models are generally based on capacity, volume, and system-performance values and do not specifically examine traffic breakdown characteristics. However, since the proposed VSL strategies are countermeasures to the impacts of breakdown conditions, considering breakdown characteristics in the calibration procedure is important for a reliable assessment. Several enhancements were proposed in this study to account for the breakdown characteristics at bottleneck locations in the calibration process. In this dissertation, the performance of the shockwave-based VSL is compared to that of VSL systems with different fixed VSL message sign locations, using the calibrated microscopic model. The results show that the shockwave-based VSL outperforms fixed-location VSL systems, and it can considerably decrease the maximum back of queue and the duration of breakdown while increasing the average speed during breakdown.
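The heuristic switching idea can be summarized by a small threshold rule of the following kind. The thresholds, the advisory limits, and the input variable names are illustrative placeholders, not the calibrated values from this dissertation.

```python
# Minimal sketch of a threshold-based VSL switching logic: the advisory limit drops
# as measured occupancy and speed cross thresholds, and the VSL is deactivated at
# the head of the congested area to maximize discharge flow.
def vsl_advisory(mean_speed_mph: float, occupancy_pct: float,
                 at_congestion_head: bool, base_limit: int = 70) -> int:
    if at_congestion_head:
        return base_limit            # deactivate VSL to increase bottleneck capacity
    if occupancy_pct > 25 or mean_speed_mph < 35:
        return 40                    # breakdown: restrict inflow upstream
    if occupancy_pct > 18 or mean_speed_mph < 50:
        return 55                    # near-breakdown: smooth speed and flow
    return base_limit

print(vsl_advisory(mean_speed_mph=42, occupancy_pct=20, at_congestion_head=False))
```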
Abstract:
Hurricanes are among the deadliest and costliest natural hazards affecting the Gulf coast and Atlantic coast areas of the United States. An effective way to minimize hurricane damage is to strengthen structures and buildings. The investigation of surface-level hurricane wind behavior and the resultant wind loads on structures is aimed at providing structural engineers with information on hurricane wind characteristics required for the design of safe structures. Information on mean wind profiles, gust factors, turbulence intensity, integral scale, and turbulence spectra and co-spectra is essential for developing realistic models of wind pressure and wind loads on structures. The research performed for this study was motivated by the fact that considerably fewer data and validated models are available for tropical than for extratropical storms. Using the surface wind measurements collected by the Florida Coastal Monitoring Program (FCMP) during hurricane passages over coastal areas, this study presents comparisons of surface roughness length estimates obtained using several estimation methods, and estimates of the mean wind and turbulence structure of hurricane winds over coastal areas under neutral stratification conditions. In addition, a program has been developed and tested to systematically analyze Wall of Wind (WoW) data, making it possible to analyze baseline characteristics of the flows obtained in the WoW. This program can be used in future research to compare WoW data with FCMP data, as gust and turbulence generator systems and other flow management devices will be used to create WoW flows that match real hurricane wind conditions as closely as possible. Hurricanes are defined as tropical cyclones for which the maximum 1-minute sustained surface wind speeds exceed 74 mph. FCMP data include data for tropical cyclones with lower sustained speeds; however, the winds analyzed in this study were sufficiently strong to ensure that neutral stratification prevailed, so their characteristics are similar to those prevailing in hurricanes. For this reason, the terms tropical cyclone and hurricane are used interchangeably in this study.
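For reference, the sketch below computes the surface-layer quantities mentioned above (mean speed, turbulence intensity, 3-second gust factor, and a roughness-length estimate) from a single-height wind record. The roughness length uses the common neutral-flow approximation TI ≈ 1/ln(z/z0), which is only one of several estimation methods compared in the study, and the data are synthetic.

```python
# Hedged sketch of basic surface-layer wind statistics under neutral stratification.
import numpy as np

def wind_statistics(speed: np.ndarray, sample_hz: float, height_m: float):
    mean_speed = speed.mean()
    turbulence_intensity = speed.std(ddof=1) / mean_speed
    # 3-second moving-average gust over the record
    window = max(1, int(round(3 * sample_hz)))
    gust = np.convolve(speed, np.ones(window) / window, mode="valid").max()
    gust_factor = gust / mean_speed
    # Neutral-flow approximation TI ~ 1 / ln(z / z0)  =>  z0 = z * exp(-1 / TI)
    roughness_length = height_m * np.exp(-1.0 / turbulence_intensity)
    return mean_speed, turbulence_intensity, gust_factor, roughness_length

rng = np.random.default_rng(2)
u = 30 + 4 * rng.standard_normal(10 * 60 * 10)   # 10 min of synthetic 10 Hz data (m/s)
print(wind_statistics(u, sample_hz=10.0, height_m=10.0))
```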
Abstract:
A novel modeling approach is applied to karst hydrology. Long-standing problems in karst hydrology and solute transport are addressed using Lattice Boltzmann methods (LBMs). These methods contrast with other modeling approaches that have been applied to karst hydrology. The motivation of this dissertation is to develop new computational models for solving groundwater hydraulics and transport problems in karst aquifers, which are widespread around the globe. This research tests the viability of the LBM as a robust alternative numerical technique for solving large-scale hydrological problems. The LB models applied in this research are briefly reviewed, and implementation issues are discussed. The dissertation focuses on testing the LB models. The LBM is tested for two different types of inlet boundary conditions for solute transport in finite and effectively semi-infinite domains. The LBM solutions are verified against analytical solutions. Zero-diffusion transport and Taylor dispersion in slits are also simulated and compared against analytical solutions. These results demonstrate the LBM's flexibility as a solute transport solver. The LBM is then applied to simulate solute transport and fluid flow in porous media traversed by larger conduits. An LBM-based macroscopic flow solver (Darcy's law-based) is linked with an anisotropic dispersion solver. Spatial breakthrough curves in one and two dimensions are fitted against the available analytical solutions. This provides a steady flow model with capabilities routinely found in groundwater flow and transport models (e.g., the combination of MODFLOW and MT3D). However, the new LBM-based model retains the ability to solve inertial flows that are characteristic of karst aquifer conduits. Transient flows in a confined aquifer are solved using two different LBM approaches. The analogy between Fick's second law (the diffusion equation) and the transient groundwater flow equation is used to solve for the transient head distribution. An altered-velocity flow solver with a source/sink term is applied to simulate a drawdown curve. Hydraulic parameters such as transmissivity and storage coefficient are linked with LB parameters. These capabilities complete the LBM's effective treatment of the types of processes that are simulated by standard groundwater models. The LB model is verified against field data for drawdown in a confined aquifer.
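To illustrate the collide-and-stream structure underlying these models, here is a deliberately small D1Q3 lattice Boltzmann sketch for one-dimensional advection-diffusion of a solute with periodic boundaries. The lattice parameters are arbitrary illustrative values, and this is not the dissertation's solver.

```python
# Minimal D1Q3 BGK lattice Boltzmann solver for 1D advection-diffusion, with
# diffusivity D = cs^2 * (tau - 0.5) in lattice units and periodic boundaries.
import numpy as np

nx, tau, u = 200, 0.8, 0.05            # nodes, relaxation time, advection velocity
w = np.array([2/3, 1/6, 1/6])          # D1Q3 weights
e = np.array([0, 1, -1])               # lattice velocities
cs2 = 1/3

conc = np.zeros(nx)
conc[90:110] = 1.0                     # initial solute slug
f = w[:, None] * conc[None, :] * (1 + e[:, None] * u / cs2)   # start at equilibrium

for _ in range(500):
    rho = f.sum(axis=0)                                        # solute concentration
    feq = w[:, None] * rho[None, :] * (1 + e[:, None] * u / cs2)
    f += -(f - feq) / tau              # BGK collision
    f[1] = np.roll(f[1], 1)            # stream right-moving population
    f[2] = np.roll(f[2], -1)           # stream left-moving population

print("diffusivity (lattice units):", cs2 * (tau - 0.5))
print("total mass conserved:", np.isclose(f.sum(), conc.sum()))
```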
Abstract:
This dissertation focused on developing an integrated surface-subsurface hydrologic numerical model by programming and testing the coupling of the USGS MODFLOW-2005 Groundwater Flow Process (GWF) package (USGS, 2005) with the 2D surface water routing model FLO-2D (O'Brien et al., 1993). The coupling included the procedures necessary to numerically integrate and verify both models as a single computational software system, hereafter referred to as WHIMFLO-2D (Wetlands Hydrology Integrated Model). An improved physical formulation of flow resistance through vegetation in shallow waters, based on the concept of drag force, was implemented for the simulation of floodplains, while the classical methods for calculating flow resistance (e.g., Manning, Chezy, Darcy-Weisbach) were maintained for canals and deeper waters. A preliminary demonstration exercise of WHIMFLO-2D at an existing field site was developed for the Loxahatchee Impoundment Landscape Assessment (LILA), an 80-acre area located at the Arthur R. Marshall Loxahatchee National Wildlife Refuge in Boynton Beach, Florida. After applying a number of simplifying assumptions, the results illustrate the ability of the model to simulate the hydrology of a wetland. In this illustrative case, a comparison between measured and simulated stage levels showed an average error of 0.31% with a maximum error of 2.8%. A comparison of measured and simulated groundwater head levels showed an average error of 0.18% with a maximum of 2.9%.
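The contrast between the two resistance treatments can be made concrete with a small sketch: Manning's equation for channel flow versus a drag-force balance for flow through emergent vegetation. The drag coefficient and frontal area per unit volume are illustrative values, not the calibrated WHIMFLO-2D parameters.

```python
# Hedged sketch: Manning resistance for channels/deeper water versus a drag-force
# balance through emergent vegetation for steady uniform flow.
import math

def manning_velocity(n: float, hydraulic_radius_m: float, slope: float) -> float:
    # Manning's equation (SI): V = (1/n) * R^(2/3) * S^(1/2)
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * math.sqrt(slope)

def vegetation_drag_velocity(slope: float, drag_coeff: float,
                             frontal_area_per_volume: float, g: float = 9.81) -> float:
    # Gravity driving force balanced by vegetative drag: g*S = 0.5*Cd*a*V^2
    return math.sqrt(2.0 * g * slope / (drag_coeff * frontal_area_per_volume))

print(manning_velocity(n=0.035, hydraulic_radius_m=1.2, slope=1e-4))
print(vegetation_drag_velocity(slope=1e-4, drag_coeff=1.0, frontal_area_per_volume=0.5))
```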
Abstract:
The main objective of this work is to develop a quasi three-dimensional numerical model to simulate stony debris flows, considering a continuum fluid phase composed of water and fine sediments, and a non-continuum phase including large particles such as pebbles and boulders. Large particles are treated in a Lagrangian frame of reference using the Discrete Element Method, while the fluid phase is treated in an Eulerian frame, using the Finite Element Method to solve the depth-averaged Navier–Stokes equations in two horizontal dimensions. The particles' equations of motion are solved in three dimensions. The model simulates particle-particle and wall-particle collisions, taking into account that the particles are immersed in a fluid. Bingham and Cross rheological models are used for the continuum phase. Both formulations provide very stable results, even in the range of very low shear rates. The Bingham formulation is better able to simulate the stopping stage of the fluid when the applied shear stresses are low. Results of numerical simulations have been compared with data from laboratory experiments on a flume-fan prototype. The results show that the model is capable of simulating the motion of large particles moving in the fluid flow, handling dense particulate flows, and avoiding overlap among particles. An application to debris flow events that occurred in Northern Venezuela in 1999 shows that the model can replicate the main boulder accumulation areas that were surveyed by the USGS. The uniqueness of this research lies in the integration of mud flow and stony debris movement in a single modeling tool that can be used for planning and management of debris-flow-prone areas.
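For reference, the sketch below evaluates apparent viscosity under the two rheological closures named above, Bingham (with one common low-shear-rate regularization) and Cross. The parameter values are illustrative and the regularization choice is an assumption, not necessarily the one used in the model.

```python
# Sketch of Bingham and Cross apparent viscosity as a function of shear rate.
import numpy as np

def bingham_apparent_viscosity(shear_rate, yield_stress, plastic_viscosity, m=1000.0):
    # Papanastasiou-style regularization keeps the value bounded as shear_rate -> 0:
    # mu_app = mu_p + tau_y * (1 - exp(-m * gamma_dot)) / gamma_dot
    shear_rate = np.asarray(shear_rate, dtype=float)
    return plastic_viscosity + yield_stress * (1 - np.exp(-m * shear_rate)) / np.maximum(shear_rate, 1e-12)

def cross_apparent_viscosity(shear_rate, mu_zero, mu_inf, k, n):
    # Cross model: mu = mu_inf + (mu_0 - mu_inf) / (1 + (K * gamma_dot)^n)
    shear_rate = np.asarray(shear_rate, dtype=float)
    return mu_inf + (mu_zero - mu_inf) / (1.0 + (k * shear_rate) ** n)

rates = np.logspace(-3, 2, 6)           # very low to moderate shear rates (1/s)
print(bingham_apparent_viscosity(rates, yield_stress=50.0, plastic_viscosity=0.5))
print(cross_apparent_viscosity(rates, mu_zero=400.0, mu_inf=0.5, k=2.0, n=1.0))
```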
Abstract:
Increasing dependence on groundwater in the Wakal River basin, India, jeopardizes the sustainability of the water supply. A numerical groundwater model was developed to better understand the aquifer system and to evaluate its potential in terms of quantity and replenishment. Potential artificial recharge areas were delineated using landscape and hydrogeologic parameters, a Geographic Information System (GIS), and remote sensing. Groundwater models are powerful tools for recharge estimation when transmissivity is known; the proper recharge must be applied to reproduce field-measured heads. The model showed that groundwater levels could decline significantly if two drought years in every four reduce recharge while groundwater withdrawal increases by 15%. The effect of such a drought is currently uncertain, however, because runoff from the basin is unknown. Remote sensing and GIS revealed that areas with slopes of less than 5%, forest cover, and a Normalized Difference Vegetation Index greater than 0.5 are suitable recharge sites.
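The screening criteria translate directly into a raster overlay, as in the toy sketch below; the small arrays simply stand in for the slope, land-cover, and NDVI rasters derived from remote sensing.

```python
# Tiny sketch of the GIS-style screening: cells qualify as candidate artificial
# recharge sites when slope < 5%, land cover is forest, and NDVI > 0.5.
import numpy as np

slope_pct = np.array([[2.0, 7.5], [4.1, 3.2]])
is_forest = np.array([[True, True], [False, True]])
ndvi      = np.array([[0.62, 0.55], [0.70, 0.48]])

suitable = (slope_pct < 5.0) & is_forest & (ndvi > 0.5)
print(suitable)   # True marks candidate recharge cells
```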
Abstract:
The objective of this study is to identify optimal designs of converging-diverging supersonic and hypersonic nozzles that perform at maximum uniformity of thermodynamic and flow-field properties with respect to their average values at the nozzle exit. This is a multi-objective design optimization problem; the design variables are parameters defining the shape of the nozzle. This work shows how varying these parameters influences the non-uniformity of the nozzle exit flow. A Computational Fluid Dynamics (CFD) software package, ANSYS FLUENT, was used to simulate the compressible, viscous gas flow field in forty nozzle shapes, including heat transfer analysis. The results of two turbulence models, k-ε and k-ω, were computed and compared. With the analysis results obtained, the Response Surface Methodology (RSM) was applied to perform a multi-objective optimization. The optimization was performed with the ModeFrontier software package using Kriging and Radial Basis Function (RBF) response surfaces. The final Pareto-optimal nozzle shapes were then analyzed with ANSYS FLUENT to confirm the accuracy of the optimization process.
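As a rough illustration of the response-surface step, the sketch below fits a radial basis function surrogate over sampled design variables and queries it for a low-non-uniformity candidate. scipy's RBFInterpolator stands in for the ModeFrontier Kriging/RBF surfaces, and the design variables and objective values are synthetic placeholders.

```python
# Hedged sketch of a response-surface (surrogate) step over nozzle design variables.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
designs = rng.uniform(0.0, 1.0, size=(40, 2))   # e.g., normalized throat radius, wall angle
# Synthetic "exit-flow non-uniformity" objective standing in for CFD results.
nonuniformity = ((designs - 0.6) ** 2).sum(axis=1) + 0.01 * rng.standard_normal(40)

surface = RBFInterpolator(designs, nonuniformity, kernel="thin_plate_spline")

# Query the surrogate on a dense grid and pick the best candidate design.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 101),
                            np.linspace(0, 1, 101)), axis=-1).reshape(-1, 2)
best = grid[np.argmin(surface(grid))]
print("candidate design (to be re-checked with CFD):", best)
```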