960 results for Dynamic increasing factor (DIF)
Abstract:
Standard economic theory suggests that capital should flow from rich countries to poor countries. However, capital has predominantly flowed to rich countries. The three essays in this dissertation attempt to explain this phenomenon. The first two essays suggest theoretical explanations for why capital has not flowed to the poor countries; the third essay empirically tests these explanations.

The first essay examines the effects of increasing returns to scale on international lending and borrowing with moral hazard. Introducing increasing returns in a two-country general equilibrium model yields possible multiple equilibria and helps explain the possibility of capital flows from a poor to a rich country. I find that a borrowing country may need to borrow sufficient amounts internationally to reach a minimum investment threshold in order to invest domestically.

The second essay examines how a poor country may invest in sectors with low productivity because of sovereign risk, and how collateral differences across sectors may exacerbate the problem. I model sovereign borrowing with a two-sector economy: one sector with increasing returns to scale (IRS) and one sector with diminishing returns to scale (DRS). Countries with incomes below a threshold will invest only in the DRS sector, and countries with incomes above the threshold will invest mostly in the IRS sector. The results help explain the existence of a bimodal world income distribution.

The third essay empirically tests the explanations for why capital has not flowed from the rich to the poor countries, with a focus on institutions and initial capital. I find that institutional variables are a very important factor but, in contrast to other studies, I show that institutions do not account for the Lucas Paradox. Evidence of increasing returns still exists, even when controlling for institutions and other variables. In addition, I find that the determinants of capital flows may depend on whether a country is rich or poor.
Abstract:
Human scent and human remains detection canines are used to locate living or deceased humans under many circumstances. Human scent canines locate individual humans on the basis of their unique scent profile, while human remains detection canines locate the general scent of decomposing human remains. Scent evidence is often collected by law enforcement agencies using a Scent Transfer Unit (STU-100), a dynamic headspace concentration device. The goals of this research were to evaluate the STU-100 for the collection of human scent samples, to apply this method to the collection of living and deceased human samples, and to create canine training aids. The airflow rate and collection material used with the STU-100 were evaluated using a novel scent delivery method: Controlled Odor Mimic Permeation Systems were created containing representative standard compounds delivered at known rates, improving the reproducibility of optimization experiments. Flow rates and collection materials were compared. Higher airflow rates usually yielded significantly fewer total volatile compounds because of compound breakthrough through the collection material. Collection from polymer- and cellulose-based materials demonstrated that the molecular backbone of the material is a factor in the trapping and releasing of compounds. The weave of the material also affects compound collection, as materials with a tighter weave demonstrated enhanced collection efficiencies. Using the optimized method, volatiles were efficiently collected from living and deceased humans. Replicates of the living human samples showed good reproducibility; however, the odor profiles from individuals were not always distinguishable from one another. Analysis of the human remains samples revealed similarity in the type and ratio of compounds.
Two types of prototype training aids were developed utilizing combinations of pure compounds as well as volatiles from actual human samples concentrated onto sorbents, which were subsequently used in field tests. The pseudo-scent aids had moderate success in field tests, and the odor pad aids had significant success. This research demonstrates that the STU-100 is a valuable tool for dog handlers and as a field instrument; however, modifications are warranted to improve its performance as a method for instrumental detection.
Abstract:
The convergence of data, audio and video on IP networks is changing the way individuals, groups and organizations communicate. This diversity of communication media presents opportunities for creating synergistic collaborative communications. This form of collaborative communication is, however, not without its challenges. The increasing number of communication service providers, coupled with a combinatorial mix of offered services, varying quality of service and oscillating pricing, makes it complex for the user to manage and maintain ‘always best’ priced or best-performing services. Consumers have to manually manage and adapt their communication in line with differences in services across devices, networks and media while ensuring that usage remains consistent with their intended goals. This dissertation proposes a novel user-centric approach to address this problem. The proposed approach aims to reduce this complexity for the user by (1) providing high-level abstractions and a policy-based methodology for automated selection of communication services guided by high-level user policies, and (2) providing services through the seamless integration of multiple communication service providers, together with an extensible framework to support that integration. The approach was implemented in the Communication Virtual Machine (CVM), a model-driven technology for realizing communication applications. The CVM includes the Network Communication Broker (NCB), the layer responsible for providing a network-independent API to the upper layers of CVM. The initial prototype of the NCB supported only a single communication framework, which limited the number, quality and types of services available. Experimental evaluation shows that the additional overhead of the approach is minimal compared to the individual communication service frameworks. Additionally, the proposed automated approach outperformed the individual communication service frameworks for cross-framework switching.
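The abstract does not expose the CVM's actual API or policy language; purely as an illustration of policy-guided service selection, the sketch below (all provider records, field names and numbers are hypothetical) picks the cheapest provider that satisfies a high-level user policy on required media and maximum latency:

```python
# Hypothetical provider catalog; names and fields are illustrative, not CVM's API.
providers = [
    {"name": "sipA",    "media": {"audio", "video"}, "latency_ms": 120, "cost": 0.04},
    {"name": "xmppB",   "media": {"audio"},          "latency_ms": 60,  "cost": 0.01},
    {"name": "webrtcC", "media": {"audio", "video"}, "latency_ms": 45,  "cost": 0.03},
]

def select_provider(policy, candidates):
    """Pick the cheapest provider satisfying a high-level user policy:
    the policy names the required media types and a latency bound."""
    feasible = [p for p in candidates
                if policy["media"] <= p["media"]          # offers all required media
                and p["latency_ms"] <= policy["max_latency_ms"]]
    # 'always best priced' among the feasible set; None if nothing qualifies
    return min(feasible, key=lambda p: p["cost"], default=None)
```

In this sketch, re-running the selection whenever prices or measured quality of service change gives the automated cross-framework switching behavior the abstract describes.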
Abstract:
Each disaster presents itself with a unique set of characteristics that are hard to determine a priori. Disaster management tasks are thus inherently uncertain, requiring knowledge sharing and quick decision making that involve coordination across different levels and collaborators. While there has been increasing interest among both researchers and practitioners in utilizing knowledge management to improve disaster management, little research has been reported about how to assess the dynamic nature of disaster management tasks, and what kinds of knowledge sharing are appropriate for different dimensions of task uncertainty.

Using a combination of qualitative and quantitative methods, this research developed dimensions, and corresponding measures, of the uncertain dynamic characteristics of disaster management tasks, and tested the relationships between these dimensions and task performance through the moderating and mediating effects of knowledge sharing.

The research conceptualized and assessed task uncertainty along three dimensions: novelty, unanalyzability, and significance; knowledge sharing along two dimensions: knowledge sharing purposes and knowledge sharing mechanisms; and task performance along two dimensions: task effectiveness and task efficiency. Analysis of survey data collected from Miami-Dade County emergency managers suggested that knowledge sharing purposes and knowledge sharing mechanisms moderate and mediate the relationship between uncertain dynamic disaster management tasks and task performance. Implications for research and practice, as well as directions for future research, are discussed.
Abstract:
With the exponentially increasing demands on and uses of GIS data visualization systems, in areas such as urban planning, environment and climate change monitoring, weather simulation and hydrographic gauging, research on geospatial vector and raster data visualization has become prevalent. However, current web GIS techniques are suitable mainly for static vector and raster data with no dynamically overlaid layers. While it is desirable to enable visual exploration of large-scale dynamic vector and raster geospatial data in a web environment, improving the performance between backend datasets and the vector and raster applications remains a challenging technical issue. This dissertation addresses two open problems: how to provide a large-scale dynamic vector and raster data visualization service, with dynamically overlaid layers, accessible from various client devices through a standard web browser; and how to make such a dynamic visualization service as rapid as a static one. To accomplish this, a large-scale dynamic vector and raster data visualization geographic information system based on parallel map tiling, together with a comprehensive performance improvement solution, is proposed, designed and implemented. The components include: quadtree-based indexing and parallel map tiling; the Legend String; vector data visualization with dynamic layer overlaying; vector data time series visualization; an algorithm for vector data rendering; an algorithm for raster data re-projection; an algorithm for elimination of superfluous levels of detail; an algorithm for vector data gridding and re-grouping; and server-side vector and raster data caching on a cluster.
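The abstract does not spell out the dissertation's own tiling and indexing scheme; as an illustration of quadtree-based map tiling, the sketch below assumes the standard Web Mercator "slippy map" convention, mapping a WGS84 coordinate to tile indices at a zoom level and interleaving them into a quadtree key (one digit per zoom level):

```python
import math

def lonlat_to_tile(lon, lat, zoom):
    """Convert a WGS84 coordinate to Web Mercator (slippy map) tile indices."""
    n = 2 ** zoom                                   # tiles per axis at this zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tile_to_quadkey(x, y, zoom):
    """Interleave the bits of x and y into a quadtree key; each digit (0-3)
    selects one quadrant per zoom level, so a key prefix names a parent tile."""
    digits = []
    for z in range(zoom, 0, -1):
        digit = 0
        mask = 1 << (z - 1)
        if x & mask:
            digit += 1
        if y & mask:
            digit += 2
        digits.append(str(digit))
    return "".join(digits)
```

Because shared key prefixes identify parent tiles, such keys also make it easy to partition tile ranges across workers for parallel tiling, in the spirit of the parallel map tiling the abstract describes.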
Abstract:
Fueled by the increasing human appetite for high computing performance, semiconductor technology has now marched into the deep sub-micron era. As transistor size keeps shrinking, more and more transistors are integrated into a single chip. This has tremendously increased the power consumption and heat generation of IC chips. The rapidly growing heat dissipation greatly increases packaging/cooling costs and adversely affects the performance and reliability of a computing system. In addition, it reduces the processor's life span and may even crash the entire computing system. Therefore, dynamic thermal management (DTM) is becoming a critical problem in modern computer system design. Extensive theoretical research has been conducted to study the DTM problem. However, most of these studies are based on theoretically idealized assumptions or simplified models. While such models and assumptions help to greatly simplify a complex problem and make it theoretically manageable, practical computer systems and applications must deal with many practical factors and details beyond them. The goal of our research was to develop a test platform that can be used to validate theoretical results on DTM under well-controlled conditions, to identify the limitations of existing theoretical results, and to develop new and practical DTM techniques. This dissertation details the background and our research efforts in this endeavor. Specifically, we first developed a customized test platform based on an Intel desktop. We then tested a number of related theoretical works and examined their limitations in a practical hardware environment. With these limitations in mind, we developed a new reactive thermal management algorithm for single-core computing systems to optimize throughput under a peak temperature constraint.
We further extended our research to a multicore platform and developed an effective proactive DTM technique for throughput maximization on multicore processors, based on task migration and dynamic voltage and frequency scaling (DVFS). The significance of our research lies in the fact that it complements the extensive current theoretical research in dealing with increasingly critical thermal problems and in enabling the continuous evolution of high-performance computing systems.
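The dissertation's actual algorithm is not detailed in this abstract; as a hedged illustration of the general shape of a reactive DTM policy, the sketch below reads the core temperature each control period and adjusts frequency to respect a peak temperature constraint (all thresholds and step sizes are invented for the sketch, not taken from the thesis):

```python
def reactive_dtm_step(temp_c, freq_ghz, t_peak=85.0, guard=3.0,
                      f_min=0.8, f_max=3.5, step=0.1):
    """One control step of a simple reactive DTM policy:
    throttle when the core temperature nears the peak constraint,
    restore frequency when there is clear thermal headroom."""
    if temp_c >= t_peak - guard:            # too hot: back off one step
        return max(f_min, freq_ghz - step)
    if temp_c < t_peak - 2 * guard:         # ample headroom: speed up
        return min(f_max, freq_ghz + step)
    return freq_ghz                         # inside the guard band: hold
```

The guard band keeps the controller from oscillating between throttle and boost every period; throughput optimization then amounts to running as close to `t_peak` as the constraint allows.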
Abstract:
A major goal of the Comprehensive Everglades Restoration Plan (CERP) is to recover historical (pre-drainage) wading bird rookeries and reverse marked decreases in wading bird nesting success in Everglades National Park. To assess efforts to restore wading birds, a trophic hypothesis was developed that proposes seasonal concentrations of small fish and crustaceans (i.e., wading bird prey) were a key factor in historical wading bird success. Drainage of the Everglades has diminished these seasonal concentrations, leading to a decline in wading bird nesting and displacing the birds from their historical nesting locations. The trophic hypothesis predicts that restoring historical hydrological patterns to pre-drainage conditions will recover the timing and location of seasonally concentrated prey, ultimately restoring wading bird nesting and foraging to the southern Everglades. We identified a set of indicators based on small fish and crustaceans that can be predicted from hydrological targets and used to assess management success in regaining suitable wading bird foraging habitat. Small fish and crustaceans are key components of the Everglades food web; they are sensitive to hydrological management, track hydrological history with little time lag, and can be studied at the landscape scale. The seasonal hydrological variation of the Everglades that creates prey concentrations presents a challenge to interpreting monitoring data. To account for this variable hydrology in our assessment, we developed dynamic hydrological targets that respond to changes in prevailing regional rainfall. We also derived statistical relationships between density and hydrological drivers for species representing four different life-history responses to drought. Finally, we used these statistical relationships and hydrological targets to set restoration targets for prey density.
We also describe a report-card methodology for communicating the results of model-based assessments to a broad audience.
Abstract:
This research sought to determine the implications of a non-traded differentiated commodity, produced with increasing returns to scale, for the welfare of countries that allow free international migration. We developed two- and three-country Ricardian models in which labor was the only factor of production. The countries traded freely in homogeneous goods produced with constant returns to scale. Each also had a non-traded differentiated good sector where production took place with increasing returns to scale technology. We then allowed free international migration between two of the countries and observed what happened to welfare in both, as indicated by their per capita utilities in the new equilibrium relative to their pre-migration utilities.

Preferences of consumers were represented by a two-tier utility function [Dixit and Stiglitz 1977]. As migration took place, it affected utility in two ways. The expanding country enjoyed the positive effect of increased product diversity in the non-traded good sector; however, it also suffered adverse terms of trade as its production cost declined. The converse was true for the contracting country. To determine the net impact on welfare, we derived indirect per capita utility functions of the countries algebraically and graphically. We then juxtaposed the graphs of the utility functions to obtain possible general equilibria, which we used to observe the welfare outcomes.

We found that the most likely outcomes were either that both countries gained, or that one country lost while the other gained. We were, however, able to generate cases in which both countries lost as a result of allowing free inter-country migration. This was most likely to happen when the shares of income spent on each country's export good differed significantly.

In the three-country world, when we allowed two of the countries to engage in a preferential trading arrangement while imposing a prohibitive tariff on imports from the third country, the welfare of the partner countries declined. When inter-union migration was permitted, welfare declined even further. We showed that this was due to the presence of the non-traded good sector.
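The abstract does not state the dissertation's exact parameterization; the standard two-tier Dixit-Stiglitz (1977) form, with a homogeneous traded good $C_T$ and a differentiated non-traded composite $C_N$ over $n$ varieties (symbols $\alpha$, $\rho$, $c_i$ are the usual textbook notation, not necessarily the thesis's), is:

```latex
U = C_T^{\,\alpha}\, C_N^{\,1-\alpha}, \qquad
C_N = \Bigl(\sum_{i=1}^{n} c_i^{\rho}\Bigr)^{1/\rho}, \qquad
0 < \rho < 1 ,
```

where $\sigma = 1/(1-\rho) > 1$ is the elasticity of substitution between varieties. A larger $n$ raises utility through the love-of-variety effect, which is the "increased product diversity" channel the abstract attributes to the expanding country.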
Abstract:
Multiple physiological systems regulate the electric communication signal of the weakly electric gymnotiform fish Brachyhypopomus pinnicaudatus. Fish were injected with neuroendocrine probes, which identified pharmacologically relevant serotonin (5-HT) receptors similar to the mammalian 5-HT1AR and 5-HT2AR. Peptide hormones of the hypothalamic-pituitary-adrenal/interrenal axis also augment the electric waveform. These results indicate that the central serotonergic system interacts with the hypothalamic-pituitary-interrenal system to regulate communication signals in this species. The same neuroendocrine probes were tested in females before and after introducing androgens, to examine the relationship between sex steroid hormones, the serotonergic system, melanocortin peptides, and electric organ discharge (EOD) modulations. Androgens caused an increase in female B. pinnicaudatus responsiveness to other pharmacological challenges, particularly to the melanocortin peptide adrenocorticotropic hormone (ACTH). A forced social challenge paradigm was administered to determine whether androgens are responsible for controlling the signal modulations these fish exhibit when they encounter conspecifics. Males and females responded similarly to this social challenge; however, introducing androgens caused implanted females to produce more exaggerated responses. These results confirm that androgens enhance an individual's capacity to produce an exaggerated response to challenge, but another, unidentified factor appears to regulate sex-specific behaviors in this species. These results suggest that the rapid electric waveform modulations B. pinnicaudatus produces in response to conspecifics are situation-specific, controlled by activation of different serotonin receptor types and the subsequent effect on release of pituitary hormones.
Abstract:
This research explores Bayesian updating as a tool for estimating parameters probabilistically by dynamic analysis of data sequences. Two distinct Bayesian updating methodologies are assessed. The first approach focuses on Bayesian updating of failure rates for primary events in fault trees. A Poisson Exponentially Weighted Moving Average (PEWMA) model is implemented to carry out Bayesian updating of failure rates for individual primary events in the fault tree. To provide a basis for testing the PEWMA model, a fault tree is developed based on the Texas City Refinery incident of 2005. A qualitative fault tree analysis is then carried out to obtain a logical expression for the top event. A dynamic fault tree analysis is carried out by evaluating the top event probability at each Bayesian updating step, by Monte Carlo sampling from the posterior failure rate distributions. It is demonstrated that PEWMA modeling is advantageous over conventional conjugate Poisson-Gamma updating techniques when failure data are collected over long time spans. The second approach focuses on Bayesian updating of parameters in non-linear forward models. Specifically, the technique is applied to the hydrocarbon material balance equation. To test the accuracy of the implemented Bayesian updating models, a synthetic data set is developed using the Eclipse reservoir simulator. Both structured-grid and MCMC sampling based solution techniques are implemented and are shown to model the synthetic data set with good accuracy. Furthermore, a graphical analysis shows that the implemented MCMC model displays good convergence properties. A case study demonstrates that the likelihood variance affects the rate at which the posterior assimilates information from the measured data sequence, and that error in the measured data significantly affects the accuracy of the posterior parameter distributions.
Increasing the likelihood variance mitigates random measurement errors but causes the overall variance of the posterior to increase. Bayesian updating is shown to be advantageous over deterministic regression techniques, as it allows for the incorporation of prior belief and full modeling of uncertainty over the parameter ranges. As such, the Bayesian approach to estimating parameters in the material balance equation shows utility for incorporation into reservoir engineering workflows.
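The PEWMA model itself adds exponential discounting of past data; the conventional conjugate Poisson-Gamma updating it is benchmarked against can be sketched in a few lines. With a Gamma(α, β) prior on the failure rate λ and k failures observed over exposure time t, the posterior is Gamma(α + k, β + t). The prior values and data below are illustrative, not taken from the dissertation:

```python
def gamma_poisson_update(alpha, beta, failures, exposure):
    """One conjugate Bayesian step: prior lambda ~ Gamma(alpha, beta)
    (beta is the rate parameter, in units of time); after observing
    `failures` events over `exposure` time units the posterior is
    Gamma(alpha + failures, beta + exposure)."""
    return alpha + failures, beta + exposure

def posterior_mean(alpha, beta):
    """Mean of a Gamma(alpha, beta) failure-rate distribution."""
    return alpha / beta

# Sequentially assimilate a data sequence of (failures, exposure) pairs;
# the prior (1.0, 10.0) is an invented example.
alpha, beta = 1.0, 10.0
for k, t in [(0, 8.0), (2, 5.0), (1, 12.0)]:
    alpha, beta = gamma_poisson_update(alpha, beta, k, t)
```

Because every observation simply accumulates into α and β with equal weight, old data never loses influence, which is exactly the limitation over long time spans that motivates the PEWMA alternative in the first approach.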
Abstract:
The rise of the twenty-first century has seen a further increase in the industrialization of Earth's resources, as society aims to meet the needs of a growing population while still protecting our environmental and natural resources. The advent of the industrial bioeconomy, which encompasses the production of renewable biological resources and their conversion into food, feed, and bio-based products, is seen as an important step in the transition towards sustainable development and away from fossil fuels. One rapidly expanding sector of the industrial bioeconomy is the use of bio-based feedstocks in electricity production as an alternative to coal, especially in the European Union.
As bioeconomy policies and objectives increasingly appear on political agendas, there is a growing need to quantify the impacts of transitioning from fossil fuel-based feedstocks to renewable biological feedstocks. Specifically, there is a growing need for a systems analysis of the potential risks of expanding the industrial bioeconomy, given that the flows within it are inextricably linked. Furthermore, greater analysis is needed of the consequences of shifting from fossil fuels to renewable feedstocks, in part through the use of life cycle assessment modeling to analyze impacts along the entire value chain.
To assess the emerging nature of the industrial bioeconomy, three objectives are addressed: (1) quantify the global industrial bioeconomy, linking the use of primary resources with the ultimate end product; (2) quantify the impacts of the expanding wood pellet energy export market of the Southeastern United States; (3) conduct a comparative life cycle assessment, incorporating the use of dynamic life cycle assessment, of replacing coal-fired electricity generation in the United Kingdom with wood pellets that are produced in the Southeastern United States.
To quantify the emergent industrial bioeconomy, an empirical analysis was undertaken. Existing databases from multiple domestic and international agencies were aggregated and analyzed in Microsoft Excel to produce a harmonized dataset of the bioeconomy. First-person interviews, existing academic literature, and industry reports were then utilized to delineate the various intermediate and end-use flows within the bioeconomy. The results indicate that within a decade, the industrial use of agriculture has risen ten percent, driven by increases in the production of bioenergy and bioproducts. The underlying resources supporting the emergent bioeconomy (i.e., land, water, and fertilizer use) were also quantified and included in the database.
Following the quantification of the existing bioeconomy, an in-depth analysis of the bioenergy sector was conducted. Specifically, the focus was on quantifying the impacts of the emergent wood pellet export sector that has rapidly developed in recent years in the Southeastern United States. A cradle-to-gate life cycle assessment was conducted to quantify supply chain impacts from two wood pellet production scenarios: roundwood and sawmill residues. For each of the nine impact categories assessed, wood pellet production from sawmill residues resulted in values 10-31% higher.
The analysis of the wood pellet sector was then expanded to include the full life cycle (i.e., cradle-to-grave). In doing so, the combustion of biogenic carbon and the subsequent timing of emissions were assessed by incorporating dynamic life cycle assessment modeling. Assuming immediate carbon neutrality of the biomass, the results indicated an 86% reduction in global warming potential when utilizing wood pellets rather than coal for electricity production in the United Kingdom. When incorporating the timing of emissions, wood pellets equated to a 75% or 96% reduction in carbon dioxide emissions, depending upon whether the forestry feedstock was considered to be harvested or planted in year one, respectively.
Finally, a policy analysis of renewable energy in the United States was conducted. Existing coal-fired power plants in the Southeastern United States were assessed in terms of incorporating the co-firing of wood pellets. Co-firing wood pellets with coal in existing Southeastern United States power stations would result in a nine percent reduction in global warming potential.
Abstract:
Urinary tract infections (UTIs) are typically caused by bacteria that colonize different regions of the urinary tract, mainly the bladder and the kidney. Approximately 25% of women who suffer from UTIs experience a recurrent infection within 6 months of the initial bout, making UTIs a serious economic burden resulting in more than 10 million hospital visits and $3.5 billion in healthcare costs in the United States alone. Type 1-fimbriated uropathogenic E. coli (UPEC) is the major causative agent of UTIs, accounting for almost 90% of bacterial UTIs. The unique ability of UPEC to bind and invade the superficial bladder epithelium allows the bacteria to persist inside epithelial niches and survive antibiotic treatment. Persistent, intracellular UPEC are retained in the bladder epithelium for long periods, making them a source of recurrent UTIs. Hence, the ability of UPEC to persist in the bladder is a matter of major health and economic concern, making studies exploring the underlying mechanism of UPEC persistence highly relevant.
In my thesis, I will describe how intracellular uropathogenic E. coli (UPEC) evade host defense mechanisms in the superficial bladder epithelium. I will also describe some of the unique traits of persistent UPEC and explore strategies to induce their clearance from the bladder. I have discovered that the UPEC virulence factor alpha-hemolysin (HlyA) plays a key role in the survival and persistence of UPEC in the superficial bladder epithelium. In-vitro and in-vivo studies comparing the intracellular survival of wild-type (WT) and hemolysin-deficient UPEC suggested that HlyA is vital for UPEC persistence in the superficial bladder epithelium. Further in-vitro studies revealed that hemolysin helped UPEC persist intracellularly by evading the bacterial expulsion actions of the bladder cells; remarkably, this virulence factor also helped the bacteria avoid degradation in lysosomes.
To elucidate the mechanistic basis for how hemolysin promotes UPEC persistence in the urothelium, we initially focused on how hemolysin facilitates the evasion of UPEC expulsion from bladder cells. We found that upon entry, UPEC were encased in “exocytic vesicles,” but as a result of HlyA expression they escaped these vesicles and entered the cytosol. Consequently, these bacteria were able to avoid expulsion by the cellular export machinery.
Since bacteria found in the cytosol of host cells are typically recognized by the cellular autophagy pathway and transported to the lysosomes, where they are degraded, we explored why this was not the case here. We observed that although cytosolic HlyA-expressing UPEC were recognized and encased by the autophagy system and transported to lysosomes, the bacteria appeared to avoid degradation in these normally degradative compartments. A closer examination of the bacteria-containing lysosomes revealed that they lacked V-ATPase, a well-known proton pump essential for the acidification of mammalian intracellular degradative compartments, which allows for the proper functioning of degradative proteases. The absence of V-ATPase appeared to be due to hemolysin-mediated alteration of the bladder cell F-actin network. From these studies, it is clear that UPEC hemolysin facilitates UPEC persistence in the superficial bladder epithelium by helping the bacteria avoid expulsion by the exocytic machinery of the cell while at the same time enabling them to avoid degradation when they are shuttled into the lysosomes.
Interestingly, even though UPEC appear to avoid elimination from the bladder cell, their ability to multiply in bladder cells seems limited. Indeed, our in-vitro and in-vivo experiments reveal that UPEC survive in the superficial bladder epithelium for extended periods of time without a significant change in CFU numbers, and these bacteria appeared quiescent in nature. This was supported by the observation that UPEC genetically unable to enter a quiescent phase exhibited limited ability to persist in bladder cells in vitro and in vivo, in the mouse bladder.
The studies elucidated in this thesis reveal how the UPEC toxin alpha-hemolysin plays a significant role in promoting UPEC persistence via the modulation of the vesicular compartmentalization of UPEC at two different stages of infection in the superficial bladder epithelium. These results highlight the importance of UPEC alpha-hemolysin as an essential determinant of UPEC persistence in the urinary bladder.
Abstract:
This study examined the effect of a spanwise angle of attack gradient on the growth and stability of a dynamic stall vortex in a rotating system. It was found that a spanwise angle of attack gradient induces a corresponding spanwise vorticity gradient which, in combination with spanwise flow, results in a redistribution of circulation along the blade. Specifically, when modelling the angle of attack gradient experienced by a wind turbine at the 30% span position during a gust event, the spanwise vorticity gradient was aligned such that circulation was transported from areas of high circulation to areas of low circulation. This increased the local dynamic stall vortex growth rate, corresponding to an increase in the lift coefficient, and decreased the local vortex stability at this point. Reversing the relative alignment of the spanwise vorticity gradient and spanwise flow results in circulation transport from areas of low circulation generation to areas of high circulation generation, acting to reduce local circulation and stabilise the vortex. This circulation redistribution describes a mechanism by which the fluctuating loads on a wind turbine are magnified, which is detrimental to turbine lifetime and performance. An understanding of this phenomenon therefore has the potential to facilitate optimised wind turbine design.
Abstract:
This thesis investigates the design of optimal tax systems in dynamic environments. The first essay characterizes the optimal tax system where wages depend on stochastic shocks and work experience. In addition to redistributive and efficiency motives, the taxation of inexperienced workers depends on a second-best requirement that encourages work experience, a social insurance motive and incentive effects. Calibrations using U.S. data yield higher expected optimal marginal income tax rates for experienced workers than for most inexperienced workers. They confirm that the average marginal income tax rate increases (decreases) with age when shocks and work experience are substitutes (complements). Finally, more variability in experienced workers' earnings prospects leads to increasing tax rates, since income taxation acts as a social insurance mechanism. In the second essay, the properties of an optimal tax system are investigated in a dynamic private-information economy where labor market frictions create unemployment that destroys workers' human capital. A two-skill-type model is considered where wages and employment are endogenous. I find that the optimal tax system distorts the first-period wages of all workers below their efficient levels, which leads to more employment. The standard no-distortion-at-the-top result no longer holds due to the combination of private information and the destruction of human capital. I show this result analytically under the Maximin social welfare function and confirm it numerically for a general social welfare function. I also investigate the use of a training program and job creation subsidies. The final essay analyzes the optimal linear tax system when there is a population of individuals whose perceptions of savings are linked to their disposable income and their family background through family cultural transmission.
Aside from the standard equity/efficiency trade-off, taxes account for the endogeneity of perceptions through two channels. First, taxing labor decreases income, which decreases the perception of savings through time. Second, taxation on savings corrects for the misperceptions of workers and thus savings and labor decisions. Numerical simulations confirm that behavioral issues push labor income taxes upward to finance saving subsidies. Government transfers to individuals are also decreased to finance those same subsidies.