15 results for Efficiency analysis
in Digital Commons at Florida International University
Abstract:
Each disaster presents a unique set of characteristics that are hard to determine a priori. Disaster management tasks are therefore inherently uncertain, requiring knowledge sharing and quick decision making that involves coordination across different levels and collaborators. While there has been increasing interest among both researchers and practitioners in utilizing knowledge management to improve disaster management, little research has been reported on how to assess the dynamic nature of disaster management tasks, or on what kinds of knowledge sharing are appropriate for different dimensions of task uncertainty.
Using a combination of qualitative and quantitative methods, this research developed dimensions, and corresponding measures, of the uncertain, dynamic characteristics of disaster management tasks, and tested the relationships between those dimensions and task performance through the moderating and mediating effects of knowledge sharing.
Furthermore, this work conceptualized and assessed task uncertainty along three dimensions: novelty, unanalyzability, and significance; knowledge sharing along two dimensions: knowledge sharing purposes and knowledge sharing mechanisms; and task performance along two dimensions: task effectiveness and task efficiency. Analysis of survey data collected from Miami-Dade County emergency managers suggested that knowledge sharing purposes and knowledge sharing mechanisms both moderate and mediate the relationship between uncertain, dynamic disaster management tasks and task performance. Implications for research and practice, as well as directions for future research, are discussed.
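The moderation logic described above can be sketched numerically: in a 2×2 design, a moderating (interaction) effect of knowledge sharing shows up as a non-zero difference-in-differences in task performance. The cell means below are hypothetical illustrations, not the study's data.

```python
# Hypothetical mean task-performance scores in a 2x2 design:
# first index = task uncertainty (low/high), second = knowledge sharing (low/high).
def interaction_effect(y_ll, y_lh, y_hl, y_hh):
    """Difference-in-differences estimate of the moderation (interaction) effect."""
    return (y_hh - y_hl) - (y_lh - y_ll)

# Hypothetical cell means: knowledge sharing helps much more
# when task uncertainty is high -> positive interaction.
effect = interaction_effect(y_ll=3.0, y_lh=3.2, y_hl=2.0, y_hh=3.4)
print(round(effect, 2))
```

A positive value here would indicate that knowledge sharing buffers the performance penalty of task uncertainty; a value near zero would indicate no moderation.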
Abstract:
To achieve the goal of sustainable development, the building energy system was evaluated from the point of view of both the first and second laws of thermodynamics. The relationship between exergy destruction and sustainable development was discussed first, followed by descriptions of the resource abundance model, the life cycle analysis model, and the economic investment effectiveness model. By combining the foregoing models, a new sustainability index was proposed. Several green building case studies in the U.S. and China were presented. The influences of building function, geographic location, climate pattern, regional energy structure, and the future technology improvement potential of renewable energy were discussed. Life cycle analyses of the building envelope, HVAC system, and on-site renewable energy system were compared from energy, exergy, environmental, and economic perspectives. It was found that climate pattern had a dramatic influence on the life cycle investment effectiveness of the building envelope, and that the energy performance of the building HVAC system was much better than its exergy performance. To further increase exergy efficiency, renewable energy rather than fossil fuel should be used as the primary energy source. A regression model of building life cycle cost and exergy consumption was set up; the optimal building insulation level could be determined by either a cost minimization or an exergy consumption minimization approach, with the exergy approach calling for a higher insulation level than the cost approach. The influence of energy price on the system selection strategy was also discussed. Two photovoltaic (PV) systems, stand-alone and grid-tied, were compared using the life cycle assessment method, and the superiority of the latter was clear. The analysis also showed that over its life span PV technology was less attractive economically because electricity prices in the U.S. and China did not fully reflect the associated environmental burden.
However, if future energy price surges and PV system cost reductions are considered, the technology could be very promising for sustainable buildings.
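The cost-versus-exergy insulation trade-off described above can be illustrated with a toy model. All coefficients below are hypothetical placeholders; the point is only that an exergy-minimizing criterion can select a higher insulation level than a cost-minimizing one, as the abstract reports.

```python
# Toy life cycle models as functions of insulation thickness x (cm):
# a per-cm investment term plus an operating term that falls with thickness.
def life_cycle_cost(x):      # $/m^2 over the life cycle (hypothetical)
    return 10.0 * x + 1000.0 / x

def life_cycle_exergy(x):    # MJ/m^2 over the life cycle (hypothetical)
    return 5.0 * x + 2000.0 / x

thicknesses = range(1, 51)
x_cost = min(thicknesses, key=life_cycle_cost)     # cost-optimal thickness
x_exergy = min(thicknesses, key=life_cycle_exergy) # exergy-optimal thickness
print(x_cost, x_exergy)  # exergy criterion favors thicker insulation
```

With these made-up coefficients the embodied exergy per centimeter is low relative to the operating exergy saved, so the exergy optimum lands at a thicker insulation level than the cost optimum.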
Abstract:
This study is an attempt to achieve a Net Zero Energy Building (NZEB) using a solar Organic Rankine Cycle (ORC), evaluated on exergetic and economic measures. The working fluid, the working conditions of the cycle, the cycle configuration, and the solar collector type are the optimization parameters for the solar ORC system. In the first section, a procedure is developed to compare ORC working fluids based on their molecular components, their temperature–entropy diagrams, and their effects on the thermal efficiency, net power generated, vapor expansion ratio, and exergy efficiency of the Rankine cycle. The fluids with the best cycle performance are identified at two temperature levels within two categories: refrigerants and non-refrigerants. Important factors that could reduce the irreversibility of the solar ORC are also investigated. In the next section, the system requirements needed to meet the electricity demand of a geothermal air-conditioned commercial building located in Pensacola, Florida, serve as the criteria for selecting the optimal components and operating conditions of the system. The solar collector loop, the building, and the geothermal air conditioning system are modeled in TRNSYS. Available electricity bills for the building and three weeks of monitoring data on the performance of the geothermal system are used to calibrate the simulation. The simulation is repeated for Miami and Houston in order to evaluate the effect of different solar radiation levels on the system requirements. The final section presents the exergoeconomic analysis of the ORC system with the optimum performance. Exergoeconomics rests on the philosophy that exergy is the only rational basis for assigning monetary costs to a system's interactions with its surroundings and to the sources of thermodynamic inefficiencies within it.
Exergoeconomic analysis of the optimal ORC system shows that the ratio Rex of the annual exergy loss to the capital cost can be considered a key parameter in optimizing a solar ORC system from the thermodynamic and economic point of view. It also shows that there is a systematic correlation between the exergy loss and capital cost for the investigated solar ORC system.
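The figures of merit used above to rank working fluids, and the Rex ratio from the exergoeconomic section, can be written out directly. Every number below (state-point enthalpies, temperatures, annual exergy loss, capital cost) is a hypothetical placeholder, not a result from the study.

```python
# Hypothetical state-point enthalpies (kJ/kg) for a simple Rankine cycle:
# 1 pump inlet, 2 boiler inlet, 3 turbine inlet, 4 condenser inlet.
h1, h2, h3, h4 = 250.0, 255.0, 480.0, 430.0
T0, T_source = 298.0, 423.0          # ambient and heat-source temperatures (K)

w_pump = h2 - h1                     # pump work input
w_turbine = h3 - h4                  # turbine work output
q_in = h3 - h2                       # heat added by the solar loop
w_net = w_turbine - w_pump

eta_thermal = w_net / q_in                   # first-law (thermal) efficiency
carnot_factor = 1.0 - T0 / T_source          # exergy content of the heat input
eta_exergy = w_net / (q_in * carnot_factor)  # second-law (exergy) efficiency

# Key exergoeconomic parameter from the final section (hypothetical values):
annual_exergy_loss = 5000.0   # MJ/year
capital_cost = 55000.0        # $
R_ex = annual_exergy_loss / capital_cost

print(round(eta_thermal, 3), round(eta_exergy, 3), round(R_ex, 4))
```

Because the exergy efficiency divides by only the recoverable part of the heat input, it always exceeds the thermal efficiency for the same cycle, which is why the two rankings of candidate fluids can differ.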
Abstract:
This dissertation analyzes hospital efficiency using various econometric techniques. The first essay provides additional and recent evidence of contract management behavior in the U.S. hospital industry. Unlike previous studies, which focus on either an input-demand equation or the cost function of the firm, this paper estimates the two jointly using a system of nonlinear equations. Moreover, it addresses the longitudinal problem of institutions adopting contract management in different years by creating a matched control group of non-adopters with the same longitudinal distribution as the group under study. The estimation procedure finds that labor, not capital, is the preferred input in U.S. hospitals regardless of managerial contract status, with institutions that adopt contract management benefiting from lower labor inefficiencies than the simulated non-adopters. These results suggest that while there is a propensity for expense preference behavior toward the labor input, contract-managed firms are able to introduce efficiencies over conventional, owner-controlled firms. Using data for the years 1998 through 2007, the second essay investigates the production technology and cost efficiency of Florida hospitals. A stochastic frontier multiproduct cost function is estimated in order to test for economies of scale, economies of scope, and relative cost efficiency. The results suggest that small hospitals experience economies of scale, while large- and medium-sized institutions do not. The empirical findings show that Florida hospitals enjoy significant scope economies regardless of size. Lastly, the evidence suggests a link between hospital size and relative cost efficiency. The results imply that state policy makers should focus on increasing hospital scale for smaller institutions while facilitating the expansion of multiproduct production for larger hospitals.
The third and final essay employs a two-stage approach to analyzing the efficiency of hospitals in the state of Florida. In the first stage, the Banker, Charnes, and Cooper (BCC) model of Data Envelopment Analysis is employed to derive an overall technical efficiency score for each non-specialty hospital in the state. Additionally, input slacks are calculated and reported in order to identify the factors of production each hospital may be overutilizing. In the second stage, a Tobit regression model is used to analyze the effects that a number of structural, managerial, and environmental factors may have on a hospital's efficiency. The results indicate that most non-specialty hospitals in the state operate away from the efficient production frontier. They also indicate that the structural makeup, managerial choices, and level of competition Florida hospitals face have an impact on their overall technical efficiency.
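A minimal sketch of the first-stage efficiency scoring, using the free disposal hull (FDH) variant in place of the BCC linear program: FDH drops the convexity assumption of BCC, so it needs no LP solver and fits in a few lines. The hospital data are invented for illustration.

```python
# Input-oriented FDH efficiency: for each unit, find the smallest uniform
# input contraction that is still dominated by some observed unit producing
# at least as much output. (The BCC model would additionally allow convex
# combinations of units, which requires solving a linear program per unit.)
def fdh_scores(inputs, outputs):
    n = len(inputs)
    scores = []
    for j in range(n):
        best = 1.0
        for k in range(n):
            # unit k must produce at least unit j's outputs, componentwise
            if all(yk >= yj for yk, yj in zip(outputs[k], outputs[j])):
                # contraction needed for unit j to match unit k's input usage
                ratio = max(xk / xj for xk, xj in zip(inputs[k], inputs[j]))
                best = min(best, ratio)
        scores.append(best)
    return scores

# Hypothetical hospitals: one input (staffed beds), one output (admissions).
X = [[2.0], [4.0], [3.0]]
Y = [[2.0], [2.0], [3.0]]
print(fdh_scores(X, Y))  # the second hospital is dominated and scores below 1
```

A score of 1.0 means no observed unit achieves the same output with proportionally less input; scores below 1.0 give the fraction of inputs that would suffice, which is the quantity fed to the second-stage Tobit regression.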
Abstract:
Inverters play a key role in connecting sustainable energy (SE) sources to local loads and the ac grid. Although the use of renewable sources has expanded rapidly in recent years, fundamental research on the design of inverters specialized for these systems is still needed. Recent advances in power electronics have led to new topologies and switching patterns for single-stage power conversion that are appropriate for SE sources and energy storage devices. The current source inverter (CSI) topology, along with a newly proposed switching pattern, is capable of converting a low dc voltage to the line ac voltage in a single stage. Simple implementation and high reliability, together with the potential for higher efficiency and lower cost, make the so-called single-stage boost inverter (SSBI) a viable competitor to existing SE-based power conversion technologies.
A dynamic model is one of the most essential requirements for performance analysis and control design of any engineering system, so satisfactory operation requires deriving a dynamic model for the SSBI system. However, because of the switching behavior and nonlinear elements involved, analysis of the SSBI is a complicated task.
This research applies the state-space averaging technique to develop the state-space-averaged model of the SSBI under stand-alone and grid-connected modes of operation. A small-signal model is then derived by means of the perturbation and linearization method. An experimental hardware setup, including a laboratory-scale prototype SSBI, was built, and the validity of the obtained models is verified through simulation and experiments. Finally, an eigenvalue sensitivity analysis is performed to investigate the stability and dynamic behavior of the SSBI system over a typical range of operation.
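The averaging-then-linearizing workflow can be illustrated on the classic averaged boost-converter model, used here as a stand-in since the abstract does not give the SSBI equations; the component values are hypothetical.

```python
import cmath

# State-space-averaged boost converter: states x = [iL, vC], duty ratio D.
#   diL/dt = (Vin - (1 - D) * vC) / L
#   dvC/dt = ((1 - D) * iL - vC / R) / C
# Linearizing about a fixed duty ratio gives the small-signal A matrix,
# whose eigenvalues determine local stability.
def small_signal_eigs(L, C, R, D):
    a11, a12 = 0.0, -(1.0 - D) / L
    a21, a22 = (1.0 - D) / C, -1.0 / (R * C)
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = cmath.sqrt(tr * tr - 4.0 * det)   # quadratic formula for a 2x2 matrix
    return (tr + disc) / 2.0, (tr - disc) / 2.0

eigs = small_signal_eigs(L=1e-3, C=100e-6, R=10.0, D=0.5)
print(eigs)  # negative real parts -> locally stable operating point
```

Sweeping `D`, `L`, `C`, or `R` and watching how the eigenvalues move is exactly the kind of eigenvalue sensitivity analysis the abstract describes, on a two-state toy instead of the full SSBI model.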
Abstract:
Cotton is the most abundant natural fiber in the world, and many countries are involved in the growing, importation, exportation, and production of this commodity. Paper documentation claiming geographic origin is the current method employed at U.S. ports for identifying cotton sources and enforcing tariffs. Because customs documentation can easily be falsified, it is necessary to develop a robust method for authenticating or refuting the claimed source of cotton commodities. This work presents, for the first time, a comprehensive approach to the chemical characterization of unprocessed cotton in order to provide an independent tool for establishing geographic origin. Elemental and stable isotope ratio analysis of unprocessed cotton increases the ability to distinguish cotton beyond the physical and morphological examinations that could be, and currently are, performed. Elemental analysis was conducted using LA-ICP-MS, LA-ICP-OES, and LIBS in order to directly compare the analytical performance of the techniques and determine the utility of each for this purpose. Multivariate predictive modeling approaches are used to determine the potential of elemental and stable isotopic information to aid in the geographic provenancing of unprocessed cotton of both domestic and foreign origin. These approaches assess the stability of the profiles to temporal and spatial variation in order to determine the feasibility of this application. This dissertation also evaluates plasma conditions and ablation processes so as to improve the quality of analytical measurements made using atomic emission spectroscopy techniques. These interactions, in LIBS particularly, are assessed to determine any potential simplification of the instrumental design and method development phases.
This is accomplished through the analysis of several matrices representing different physical substrates, to assess the potential of adopting universal LIBS parameters at 532 nm and 1064 nm for some important operating parameters. A novel approach to evaluating both ablation processes and plasma conditions from a single measurement was developed and used to determine the "useful ablation efficiency" of different materials. The work presented here demonstrates the potential for a priori prediction of some probable laser parameters important in analytical LIBS measurement.
Abstract:
There are many factors which can assist in controlling the cost of labor in the food service industry. The author discusses a number of these, including scheduling, establishing production standards, forecasting workloads, analyzing employee turnover, combating absenteeism, and controlling overtime.
Abstract:
Edible oil is an important contaminant in water and wastewater. Oil droplets smaller than 40 μm may remain in effluent as an emulsion and combine with other contaminants in the water. Coagulation/flocculation processes are used to remove oil droplets from water and wastewater: by adding a polymer at the proper dose, small oil droplets can be flocculated and separated from the water. The purpose of this study was to characterize and analyze the morphology of flocs and floc formation in edible oil–water emulsions using microscopic image analysis techniques. The fractal dimension, polymer concentration, and effects of pH and temperature were investigated and analyzed to develop a fractal model of the flocs. Three types of edible oil (corn, olive, and sunflower oil) at concentrations of 600 ppm (by volume) were used to determine the optimum polymer dosage and the effects of pH and temperature. To find the optimum polymer dose, polymer was added to the oil–water emulsions at concentrations of 0.5, 1.0, 1.5, 2.0, 3.0, and 3.5 ppm (by volume). The clearest supernatants from flocculation of corn, olive, and sunflower oil were achieved at a polymer dosage of 3.0 ppm, producing turbidities of 4.52, 12.90, and 13.10 NTU, respectively. This polymer concentration was subsequently used to study the effects of pH and temperature on flocculation. The effect of pH was studied at pH 5, 7, 9, and 11 at 30°C. Microscopic image analysis was used to investigate the morphology of the flocs in terms of fractal dimension, radius of oil droplets trapped in the floc, floc size, and histograms of the oil droplet distribution. The fractal dimension indicates the density of oil droplets captured in the flocs. Comparison of fractal dimensions showed pH to be one of the most important factors controlling droplet flocculation: neutral pH (pH 7) showed the highest degree of flocculation, while acidic (pH 5) and basic (pH 9 and 11) conditions showed low flocculation efficiency.
The fractal dimensions obtained from flocculation of corn, olive, and sunflower oil at pH 7 and 30°C were 1.2763, 1.3592, and 1.4413, respectively. The effect of temperature was explored at 20°, 30°, and 40°C at pH 7. The results revealed that temperature significantly affected flocculation: the fractal dimensions of the flocs at 20°, 30°, and 40°C were 1.82, 1.28, and 1.29 for corn oil; 1.62, 1.36, and 1.42 for olive oil; and 1.36, 1.44, and 1.28 for sunflower oil. After comparison of the fractal dimension, radius of captured oil droplets, and floc length for each oil type, the optimal flocculation temperature was determined to be 30°C.
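Fractal dimension from a microscopic image is typically estimated by box counting: cover the binarized floc with grids of decreasing box size and fit the log–log slope of occupied-box counts. The sketch below uses a synthetic straight-line "floc", whose dimension should come out near 1; it is an illustration of the general method, not the study's image pipeline.

```python
import math

def box_counting_dimension(pixels, box_sizes):
    """Estimate the fractal dimension of a set of (x, y) pixel coordinates.

    Counts occupied boxes at each scale, then returns the least-squares
    slope of log(count) versus log(1 / box_size).
    """
    xs, ys = [], []
    for s in box_sizes:
        occupied = {(x // s, y // s) for x, y in pixels}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(occupied)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic "floc": a straight diagonal line -> dimension ~ 1.0.
line = [(i, i) for i in range(256)]
print(round(box_counting_dimension(line, [2, 4, 8, 16, 32]), 3))
```

On a real floc image the pixel set would come from thresholding the micrograph, and dimensions between 1 and 2 (like the 1.28–1.82 values reported above) indicate how densely droplets fill the floc.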
Abstract:
In this dissertation, I present an integrated model of organizational performance. Most prior research has relied extensively on testing individual linkages, often with cross-sectional data. Here, longitudinal unit-level data from 559 restaurants, collected over a one-year period, were used to test the proposed model. The model was hypothesized to begin with employee satisfaction as a key antecedent that ultimately leads to improved financial performance. Several variables, including turnover, efficiency, and guest satisfaction, are proposed as mediators of the satisfaction–performance relationship. The current findings replicate and extend past research based on individual-level data. The overall model adequately explained the data, but was significantly improved with an additional link from employee satisfaction to efficiency, which was not originally hypothesized. Management turnover was a strong predictor of hourly-level team turnover, and both were significant predictors of efficiency. Full findings for each hypothesis are presented and practical organizational implications are given. Limitations and recommendations for future research are provided.
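The mediated chain in such a model (e.g. satisfaction → efficiency → financial performance) can be illustrated with a noise-free product-of-coefficients sketch: under full mediation, the indirect effect (the product of the two path slopes) equals the total effect. All numbers below are hypothetical.

```python
def slope(x, y):
    """OLS slope of y regressed on x (simple bivariate regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Hypothetical unit-level data: satisfaction -> efficiency -> profit,
# built noise-free so the mediation identity is exact.
satisfaction = [1.0, 2.0, 3.0, 4.0, 5.0]
efficiency = [0.5 * s + 1.0 for s in satisfaction]   # path a = 0.5
profit = [2.0 * e + 3.0 for e in efficiency]         # path b = 2.0

a = slope(satisfaction, efficiency)
b = slope(efficiency, profit)
total = slope(satisfaction, profit)
print(a * b, total)  # indirect effect a*b equals the total effect (full mediation)
```

With real, noisy data the dissertation's structural model would estimate these paths simultaneously; the toy version only shows the arithmetic behind calling efficiency a mediator.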
Abstract:
This dissertation introduces a new approach for assessing the effects of pediatric epilepsy on the language connectome. Two novel data-driven network construction approaches are presented. These methods connect different brain regions using either the extent or the intensity of language-related activations, as identified by independent component analysis of fMRI data. An auditory description decision task (ADDT) paradigm was used to activate the language network in 29 patients and 30 controls recruited from three major pediatric hospitals. Empirical evaluations illustrated that pediatric epilepsy can cause, or is associated with, a reduction in network efficiency. Patients showed a propensity to inefficiently employ the whole brain network to perform the ADDT language task; in contrast, controls appeared to efficiently use smaller, segregated network components to achieve the same task. To explain the causes of the decreased efficiency, graph-theoretical analysis was carried out. The analysis revealed no substantial global network feature differences between the patient and control groups. It also showed that for both groups the language network exhibited small-world characteristics; however, the patients' extent-of-activation network showed a tendency toward more random networks. The intensity-of-activation network displayed ipsilateral hub reorganization at the local level: left hemispheric hubs displayed greater centrality values for patients, whereas right hemispheric hubs displayed greater centrality values for controls. This hub hemispheric disparity was not correlated with the right atypical language laterality found in six patients. Finally, a multi-level unsupervised clustering scheme based on self-organizing maps (a type of artificial neural network) and k-means was able to blindly separate the subjects into their respective patient or control groups with fair accuracy.
The clustering was initiated using only the local nodal centrality measurements. Compared to the extent-of-activation network, clustering on the intensity-of-activation network demonstrated better precision. This outcome supports the assertion that the local centrality differences presented by the intensity-of-activation network can be associated with focal epilepsy.
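The final clustering stage can be sketched with a plain k-means (k = 2) on per-subject nodal centrality features; the SOM pre-stage is omitted here, the seeds are fixed for determinism, and the feature vectors are synthetic.

```python
def kmeans_two(points, seeds, iters=10):
    """Plain Lloyd's algorithm with k = 2 and fixed initial centers."""
    centers = [list(seeds[0]), list(seeds[1])]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, p in enumerate(points):
            d = [sum((a - c) ** 2 for a, c in zip(p, ctr)) for ctr in centers]
            labels[i] = 0 if d[0] <= d[1] else 1
        # update step: move each center to its members' mean
        for k in (0, 1):
            members = [p for p, lab in zip(points, labels) if lab == k]
            if members:
                centers[k] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Synthetic (left-hub, right-hub) centrality pairs: "patients" left-dominant,
# "controls" right-dominant (hypothetical values mimicking the reported disparity).
subjects = [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15),
            (0.1, 0.9), (0.2, 0.8), (0.15, 0.85)]
labels = kmeans_two(subjects, seeds=(subjects[0], subjects[3]))
print(labels)  # first three subjects share one cluster, last three the other
```

When the hemispheric centrality disparity is as clean as in this toy data the two groups separate perfectly, which is the blind-separation behavior the abstract reports in weaker form for real subjects.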
Abstract:
Protecting confidential information from improper disclosure is a fundamental security goal. While encryption and access control are important tools for ensuring confidentiality, they cannot prevent an authorized system from leaking confidential information to its publicly observable outputs, whether inadvertently or maliciously. Hence, secure information flow aims to provide end-to-end control of information flow. Unfortunately, the traditionally adopted policy of noninterference, which forbids all improper leakage, is often too restrictive. Theories of quantitative information flow address this issue by quantifying the amount of confidential information leaked by a system, with the goal of showing that it is intuitively "small" enough to be tolerated. Given such a theory, it is crucial to develop automated techniques for calculating the leakage in a system.
This dissertation is concerned with program analysis for calculating the maximum leakage, or capacity, of confidential information in the context of deterministic systems and under three proposed entropy measures of information leakage: Shannon entropy leakage, min-entropy leakage, and g-leakage. In this context, it turns out that calculating the maximum leakage of a program reduces to counting the number of possible outputs that it can produce.
The new approach introduced in this dissertation is to determine two-bit patterns: the relationships among pairs of bits in the output; for instance, we might determine that two bits must be unequal. By counting the number of solutions to the two-bit patterns, we obtain an upper bound on the number of possible outputs, and hence a bound on the maximum leakage. We first describe a straightforward computation of the two-bit patterns using an automated prover. We then show a more efficient implementation that uses an implication graph to represent the two-bit patterns. It efficiently constructs the graph through the use of an automated prover, random executions, STP counterexamples, and deductive closure. The effectiveness of our techniques, in terms of both efficiency and accuracy, is shown through a number of case studies found in recent literature.
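The counting idea can be demonstrated on a toy deterministic program: collect the feasible value pairs for every pair of output bits, count the bit-vectors consistent with all of those pairs, and take log2 of that count as an upper bound on the (min-entropy capacity) leakage. Here the patterns come from brute-force runs standing in for the automated prover, and the program itself is invented.

```python
import math
from itertools import combinations

BITS = 4  # output width of the toy program

def program(x):
    """Toy deterministic program: only output bits 0 and 2 depend on the secret."""
    return x & 0b0101

def bit(v, i):
    return (v >> i) & 1

# Collect feasible two-bit patterns from exhaustive runs over 8-bit secrets
# (a stand-in for deriving the patterns with an automated prover).
outputs = {program(x) for x in range(2 ** 8)}
allowed = {(i, j): {(bit(o, i), bit(o, j)) for o in outputs}
           for i, j in combinations(range(BITS), 2)}

# Count candidate outputs consistent with every two-bit pattern:
# this over-approximates the set of feasible outputs.
consistent = [c for c in range(2 ** BITS)
              if all((bit(c, i), bit(c, j)) in allowed[(i, j)]
                     for i, j in combinations(range(BITS), 2))]

leak_bound = math.log2(len(consistent))  # upper bound on leakage in bits
print(len(outputs), len(consistent), leak_bound)
```

For this program the bound is tight (both counts are 4, so at most 2 bits leak); in general the pattern-based count can exceed the true output count, which is why it yields an upper bound rather than an exact capacity.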
Abstract:
The primary purpose of this thesis was to present a theoretical large-signal analysis of the power gain and efficiency of a microwave power amplifier for LS-band communications using software simulation. Power gain, efficiency, reliability, and stability are important characteristics in the power amplifier design process. These characteristics affect advanced wireless systems, which require low-cost device amplification without sacrificing system performance. Large-signal modeling and input and output matching components are used in this thesis. Motorola's Electro Thermal LDMOS model is a new transistor model that includes self-heating effects and is capable of both small- and large-signal simulation. It allows most design considerations to focus on stability, power gain, bandwidth, and DC requirements. The matching technique allows the gain to be maximized at a specific target frequency. Calculations and simulations for the microwave power amplifier design were performed using Matlab and Microwave Office, respectively; Microwave Office is the simulation software used in this thesis. The study demonstrated that Motorola's Electro Thermal LDMOS transistor is a viable solution for common-source amplifier applications in high-power base stations. The MET-LDMOS met the stability requirements for the specified frequency range without a stability-improvement model. The power gain of the amplifier circuit was improved through proper input/output matching design. The gain and efficiency of the amplifier improved by approximately 4 dB and 7.27%, respectively, and the gain is roughly 0.89 dB higher than the maximum gain specified in the MRF21010 data sheet. This work can lead to efficient modeling and development of high-power LDMOS transistor implementations in commercial and industrial applications.
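Gain and efficiency figures like those quoted above are straightforward to compute from simulated power levels. The operating points below are hypothetical placeholders, not the MRF21010 simulation results.

```python
import math

def gain_db(p_out, p_in):
    """Power gain in dB from output and input powers (watts)."""
    return 10.0 * math.log10(p_out / p_in)

def drain_efficiency(p_out, p_dc):
    """RF output power over DC supply power, in percent."""
    return 100.0 * p_out / p_dc

# Hypothetical operating points before and after input/output matching.
before = {"p_in": 0.1, "p_out": 1.0, "p_dc": 10.0}
after = {"p_in": 0.1, "p_out": 2.0, "p_dc": 11.0}

gain_improvement = (gain_db(after["p_out"], after["p_in"])
                    - gain_db(before["p_out"], before["p_in"]))
eff_improvement = (drain_efficiency(after["p_out"], after["p_dc"])
                   - drain_efficiency(before["p_out"], before["p_dc"]))
print(round(gain_improvement, 2), round(eff_improvement, 2))
```

Doubling the output power at fixed input power yields a 10·log10(2) ≈ 3 dB gain improvement; the thesis's roughly 4 dB figure would correspond to a somewhat larger power ratio from its matching network.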