944 results for Almost Optimal Density Function


Relevance:

30.00%

Publisher:

Abstract:

Supply chain operations directly affect service levels. Decisions on modifying facilities are generally based on overall cost, leaving out the efficiency of each unit. By decomposing the supply chain superstructure, efficiency analysis of the facilities (warehouses or distribution centers) that serve customers can be implemented easily. With the proposed algorithm, a facility is selected on the basis of service-level maximization rather than cost minimization alone, since the analysis filters all feasible solutions using the Data Envelopment Analysis (DEA) technique. Through multiple iterations, solutions are filtered via DEA and only the efficient ones are retained, leading to cost minimization. In this work, the problem of optimal supply chain network design is addressed with a DEA-based algorithm: a Branch and Efficiency (B&E) algorithm is deployed for its solution. In this DEA approach, each solution (a potentially installed warehouse, plant, etc.) is treated as a Decision Making Unit and is thus characterized by inputs and outputs. Through additional constraints named "efficiency cuts", the algorithm selects only efficient solutions, providing better objective function values. The applicability of the proposed algorithm is demonstrated through illustrative examples.
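The DEA filtering at the heart of such an approach relies on an efficiency score per Decision Making Unit. The sketch below uses a generic input-oriented CCR model with hypothetical warehouse data (cost as the input, service level as the output); it is an illustration of the scoring step, not the paper's B&E implementation.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o` (columns of X, Y are DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                # minimise theta over [theta, lambda]
    A_in = np.hstack([-X[:, [o]], X])          # X @ lam <= theta * x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # Y @ lam >= y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]                            # score of 1.0 means efficient

# three hypothetical candidate warehouses: cost (input), service level (output)
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[1.0, 2.0, 3.0]])
effs = [dea_efficiency(X, Y, o) for o in range(3)]  # the third is inefficient
```

In a B&E-style search, only candidate solutions scoring 1.0 would survive an "efficiency cut" and remain in the branch-and-bound tree.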

Relevance:

30.00%

Publisher:

Abstract:

The dynamical evolution of dislocations in plastically deformed metals is controlled both by deterministic factors arising from applied loads and by stochastic effects appearing due to fluctuations of internal stress. Such stochastic dislocation processes and the associated spatially inhomogeneous modes lead to randomness in the observed deformation structure. Previous studies have analyzed the role of randomness in such textural evolution, but none of these models have considered the impact of a finite decay time of the stochastic perturbations (all previous models assumed instantaneous relaxation, which is unphysical) on the overall dynamics of the system. The present article bridges this knowledge gap by introducing colored noise, in the form of Ornstein-Uhlenbeck noise, into the analysis of a class of linear and nonlinear Wiener and Ornstein-Uhlenbeck processes onto which these structural dislocation dynamics can be mapped. Based on an analysis of the relevant Fokker-Planck model, our results show that linear Wiener processes remain unaffected by the second time scale in the problem, but all nonlinear processes, both of Wiener and of Ornstein-Uhlenbeck type, scale as a function of the noise decay time τ. The results are expected to have ramifications for existing experimental observations and to inspire new numerical and laboratory tests offering further insight into the competition between deterministic and random effects in modeling plastically deformed samples.
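Colored noise of the kind described can be generated numerically. The following is a minimal illustrative sketch (not the article's model) of a stationary Ornstein-Uhlenbeck noise with decay time τ, using its exact one-step update; the parameter values are arbitrary.

```python
import numpy as np

def ou_noise(tau, sigma, dt, n, seed=0):
    """Stationary Ornstein-Uhlenbeck noise with decay time tau, via the exact
    one-step update eta[i] = rho*eta[i-1] + sigma*sqrt(1 - rho^2)*xi_i."""
    rng = np.random.default_rng(seed)
    rho = np.exp(-dt / tau)                  # one-step autocorrelation
    eta = np.empty(n)
    eta[0] = rng.normal(0.0, sigma)          # start in the stationary distribution
    for i in range(1, n):
        eta[i] = rho * eta[i - 1] + sigma * np.sqrt(1 - rho**2) * rng.normal()
    return eta

noise = ou_noise(tau=0.5, sigma=1.0, dt=0.01, n=50_000)
# the sample lag-1 autocorrelation should be close to exp(-dt/tau)
lag1 = np.corrcoef(noise[:-1], noise[1:])[0, 1]
```

In the white-noise limit τ → 0 the one-step autocorrelation ρ vanishes and the driving term reduces to uncorrelated Gaussian kicks, recovering the instantaneous-relaxation models the article criticizes.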

Relevance:

30.00%

Publisher:

Abstract:

This paper deals with monolithic decoupled XYZ compliant parallel mechanisms (CPMs) for multi-function applications, which can be fabricated monolithically without assembly and have the capability of kinetostatic decoupling. First, the conceptual design of monolithic decoupled XYZ CPMs is presented using identical spatial compliant multi-beam modules based on a decoupled 3-PPPR parallel kinematic mechanism. Three types of applications are described in principle: motion/positioning stages, force/acceleration sensors, and energy-harvesting devices. Kinetostatic and dynamic modelling is then conducted to capture the displacements of any stage under loads acting at any stage, as well as the natural frequency, with comparisons against FEA results. Finally, the performance characteristics for motion-stage applications are investigated in detail to show how changes in the geometrical parameters affect them, which provides initial optimal estimates. Results show that a smaller beam thickness and a larger cubic-stage dimension improve the performance characteristics, excluding the natural frequency, under allowable conditions. To improve the natural frequency, a stiffness-enhanced monolithic decoupled configuration can be adopted, achieved by employing more beams in the spatial modules or by reducing the mass of each cubic stage. In addition, an isotropic variation with a different motion range along each axis and the same payload in each leg is proposed. A redundant design for monolithic fabrication is also introduced, which overcomes the drawback that a failed compliant beam in a monolithic structure is difficult to replace, thereby extending the CPM's life.

Relevance:

30.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, yet uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is thus of little relevance outside certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation; the second is the design and characterization of computational algorithms that scale better in n or p. In the first case, the focus is on joint inference outside the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
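The connection between latent structure models and reduced-rank nonnegative tensor factorizations can be illustrated numerically. The sketch below (hypothetical parameters; three categorical variables with three levels each) builds a joint probability mass function of PARAFAC form, i.e. a sum of k rank-one nonnegative tensors, so its nonnegative rank is at most the number of latent classes k.

```python
import numpy as np

rng = np.random.default_rng(1)
k, d = 2, 3                                   # latent classes; levels per variable
nu = rng.dirichlet(np.ones(k))                # latent class weights
lam = [rng.dirichlet(np.ones(d), size=k)      # class-conditional pmfs, one per variable
       for _ in range(3)]

# Joint pmf tensor: P[a,b,c] = sum_h nu[h] * lam1[h,a] * lam2[h,b] * lam3[h,c],
# a sum of k rank-one nonnegative tensors (nonnegative rank <= k).
P = np.einsum('h,ha,hb,hc->abc', nu, lam[0], lam[1], lam[2])
total = P.sum()                               # a valid pmf sums to 1
```

A log-linear model would instead constrain which interaction terms of log P are nonzero; the chapter's results relate that sparsity pattern to the nonnegative rank appearing here.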

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations, and in other common population structure inference problems, is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
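As a toy illustration of approximating a posterior by a Gaussian (here a generic Laplace approximation at the mode, not the chapter's optimal Gaussian approximation under Diaconis-Ylvisaker priors), consider a single Poisson log-rate with a flat prior; the data are made up.

```python
import numpy as np

# Counts y_i ~ Poisson(exp(theta)) with a flat prior on the log-rate theta.
y = np.array([3, 5, 4, 6, 2, 4, 5, 3])
n, s = len(y), y.sum()

# Log-posterior l(theta) = theta*s - n*exp(theta) (up to a constant).
theta_hat = np.log(s / n)      # mode: solves s - n*exp(theta) = 0
var_hat = 1.0 / s              # inverse negative Hessian: n*exp(theta_hat) = s

# The approximate posterior is N(theta_hat, var_hat); e.g. a 95% credible interval:
lo, hi = theta_hat + np.array([-1.96, 1.96]) * np.sqrt(var_hat)
```

The appeal, as in the chapter, is that the Gaussian form gives closed-form credible regions where the exact posterior does not; the chapter's contribution is quantifying the Kullback-Leibler error of such approximations for log-linear models.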

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence-chain Metropolis algorithm show good mixing on the same dataset.
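The slow-mixing phenomenon can be reproduced with the standard truncated-Normal (Albert-Chib) data augmentation sampler for an intercept-only probit model; the synthetic data below (large n, very few successes) are illustrative and not the thesis's advertising dataset.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
n, n_success = 2000, 5                 # large sample, rare successes
y = np.zeros(n)
y[:n_success] = 1

beta, draws = 0.0, []
for it in range(1500):
    # z_i | beta, y_i ~ N(beta, 1) truncated to (0, inf) if y_i = 1, else (-inf, 0)
    a = np.where(y == 1, -beta, -np.inf)       # standardised lower bounds
    b = np.where(y == 1, np.inf, -beta)        # standardised upper bounds
    z = beta + truncnorm.rvs(a, b, random_state=rng)
    # beta | z (flat prior) ~ N(mean(z), 1/n)
    beta = rng.normal(z.mean(), 1.0 / np.sqrt(n))
    draws.append(beta)

draws = np.array(draws[500:])                    # discard burn-in
lag1 = np.corrcoef(draws[:-1], draws[1:])[0, 1]  # near 1 => very slow mixing
```

With 5 successes in 2000 trials the chain's lag-1 autocorrelation is close to one, so the effective sample size is a small fraction of the number of iterations, consistent with the spectral-gap result stated above.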

Relevance:

30.00%

Publisher:

Abstract:

The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.

At present, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs differs from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high density of I/O ports and interconnects, (5) reduced number of test pins, and (6) high power consumption. This research targets the above challenges, and effective solutions have been developed to test both the dies and the interposer.

The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.

In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.

To address the high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback shift register (LFSR), a multiple-input signature register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
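As a minimal illustration of the two signature-analysis primitives named above (not the dissertation's actual DfT design), the sketch below pairs an LFSR pattern generator with a MISR that compacts interconnect responses into a signature; the widths, tap positions, and injected fault are arbitrary.

```python
def lfsr(seed, taps, width, n):
    """Fibonacci LFSR: return n pseudo-random test patterns of `width` bits."""
    state, out = seed, []
    for _ in range(n):
        out.append(state)
        fb = 0
        for t in taps:                    # feedback = XOR of the tap bits
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return out

def misr(responses, taps, width):
    """Compact a stream of response words into a single signature."""
    sig = 0
    for r in responses:
        fb = 0
        for t in taps:
            fb ^= (sig >> t) & 1
        sig = (((sig << 1) | fb) ^ r) & ((1 << width) - 1)
    return sig

patterns = lfsr(seed=0b1001, taps=(3, 2), width=4, n=10)
good = misr(patterns, taps=(3, 2), width=4)      # golden signature (fault-free)
bad = misr([p ^ 0b0100 if i == 5 else p          # one interconnect bit flipped once
            for i, p in enumerate(patterns)], taps=(3, 2), width=4)
```

In a BIST session the on-die MISR signature is compared against the golden signature; any single injected fault necessarily changes it, since the MISR update is linear and invertible.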

To accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. To minimize the test time, two optimization solutions are introduced: the first minimizes the number of input test pins, and the second minimizes the number of output test pins. In addition, two subgroup configuration methods are proposed to generate subgroups inside each test group.

Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power-supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not be toggled at the same time during shift. The proposed programmable method therefore does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed, and the shared boundary length between blocks is then calculated. Based on the positional relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems. For larger designs, a heuristic algorithm is proposed and evaluated.

In summary, the dissertation targets important design and optimization problems related to testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experimental results, and a set of test and design-for-test methods that make testing effective and feasible from a cost perspective.

Relevance:

30.00%

Publisher:

Abstract:

B cells mediate immune responses via the secretion of antibody and interactions with other immune cell populations through antigen presentation, costimulation, and cytokine secretion. Although B cells are primarily believed to promote immune responses using the mechanisms described above, some unique regulatory B cell populations that negatively influence inflammation have also been described. Among these is a rare interleukin (IL)-10-producing B lymphocyte subset termed “B10 cells.” B cell-derived IL-10 can inhibit various arms of the immune system, including polarization of Th1/Th2 cell subsets, antigen presentation and cytokine production by monocytes and macrophages, and activation of regulatory T cells. Further studies in numerous autoimmune and inflammatory models of disease have confirmed the ability of B10 cells to negatively regulate inflammation in an IL-10-dependent manner. Although IL-10 is indispensable to the effector functions of B10 cells, how this specialized B cell population is selected in vivo to produce IL-10 is unknown. Some studies have demonstrated a link between B cell receptor (BCR)-derived signals and the acquisition of IL-10 competence. Additionally, whether antigen-BCR interactions are required for B cell IL-10 production during homeostasis as well as active immune responses is a matter of debate. Therefore, the goal of this thesis is to determine the importance of antigen-driven signals during B10 cell development in vivo and during B10 cell-mediated immunosuppression.

Chapter 3 of the dissertation explored the BCR repertoire of spleen and peritoneal cavity B10 cells using single-cell sequencing to lay the foundation for studies to understand the full range of antigens that may be involved in B10 cell selection. In both the spleen and peritoneal cavity B10 cells studied, BCR gene utilization was diverse, and the expressed BCR transcripts were largely unmutated. Thus, B10 cells are likely capable of responding to a wide range of foreign and self-antigens in vivo.

Studies in Chapter 4 determined the predominant antigens that drive B cell IL-10 secretion during homeostasis. A novel in vitro B cell expansion system was used to isolate B cells actively expressing IL-10 in vivo and probe the reactivities of their secreted monoclonal antibodies. B10 cells were found to produce polyreactive antibodies that bound multiple self-antigens. Therefore, in the absence of overarching active immune responses, B cell IL-10 is secreted following interactions with self-antigens.

Chapter 5 of this dissertation investigated whether foreign antigens are capable of driving B10 cell expansion and effector activity during an active immune response. In a model of contact-induced hypersensitivity, in vitro B cell expansion was again used to isolate antigen-specific B10 clones, which were required for optimal immunosuppression.

The studies described in this dissertation shed light on the relative contributions of BCR-derived signals during B10 cell development and effector function. Furthermore, these investigations demonstrate that B10 cells respond to both foreign and self-antigens, which has important implications for the potential manipulation of B10 cells for human therapy. Therefore, B10 cells represent a polyreactive B cell population that provides antigen-specific regulation of immune responses via the production of IL-10.

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigates the design of optimal tax systems in dynamic environments. The first essay characterizes the optimal tax system when wages depend on stochastic shocks and work experience. In addition to redistributive and efficiency motives, the taxation of inexperienced workers depends on a second-best requirement that encourages work experience, a social insurance motive, and incentive effects. Calibrations using U.S. data yield expected optimal marginal income tax rates that are higher for experienced workers than for most inexperienced workers. They confirm that the average marginal income tax rate increases (decreases) with age when shocks and work experience are substitutes (complements). Finally, more variability in experienced workers' earnings prospects leads to increasing tax rates, since income taxation acts as a social insurance mechanism. In the second essay, the properties of an optimal tax system are investigated in a dynamic private-information economy where labor market frictions create unemployment that destroys workers' human capital. A two-skill-type model is considered in which wages and employment are endogenous. I find that the optimal tax system distorts the first-period wages of all workers below their efficient levels, which leads to more employment. The standard no-distortion-at-the-top result no longer holds due to the combination of private information and the destruction of human capital. I show this result analytically under the Maximin social welfare function and confirm it numerically for a general social welfare function. I also investigate the use of a training program and job-creation subsidies. The final essay analyzes the optimal linear tax system when there is a population of individuals whose perceptions of savings are linked to their disposable income and to their family background through family cultural transmission.
Aside from the standard equity/efficiency trade-off, taxes account for the endogeneity of perceptions through two channels. First, taxing labor decreases income, which decreases the perception of savings over time. Second, taxation of savings corrects for workers' misperceptions and thus their savings and labor decisions. Numerical simulations confirm that behavioral issues push labor income taxes upward to finance saving subsidies. Government transfers to individuals are also decreased to finance those same subsidies.

Relevance:

30.00%

Publisher:

Abstract:

A sufficiently complex set of molecules, if subject to perturbation, will self-organise and show emergent behaviour. If such a system can take on information, it will become subject to natural selection. This could explain how self-replicating molecules evolved into life and how intelligence arose. A pivotal step in this evolutionary process was of course the emergence of the eukaryote and the advent of the mitochondrion, which both enhanced energy production per cell and increased the ability to process, store and utilise information. Recent research suggests that from its inception life embraced quantum effects such as "tunnelling" and "coherence", while competition and stressful conditions provided a constant driver for natural selection. We believe that the biphasic adaptive response to stress described by hormesis, a process that captures information to enable adaptability, is central to this whole process. Critically, hormesis could improve mitochondrial quantum efficiency, improving the ATP/ROS ratio, while inflammation, which is tightly associated with the ageing process, might do the opposite. This all suggests that to achieve optimal health and healthy ageing, one has to stress the system sufficiently to ensure peak mitochondrial function, which itself could reflect selection of optimum efficiency at the quantum level.

Relevance:

30.00%

Publisher:

Abstract:

Tidal stream turbines could have several direct impacts upon pursuit-diving seabirds foraging within tidal stream environments (mean horizontal current speeds > 2 m s−1), including collisions and displacement. Understanding how foraging seabirds respond to temporally variable but predictable hydrodynamic conditions immediately around devices could identify when interactions between seabirds and devices are most likely to occur; this information would quantify the magnitude of potential impacts and facilitate the development of suitable mitigation measures. This study uses shore-based observational surveys and Finite Volume Community Ocean Model outputs to test whether temporally predictable hydrodynamic conditions (horizontal current speeds, water elevation, turbulence) influenced the density of foraging black guillemots Cepphus grylle and European shags Phalacrocorax aristotelis in a tidal stream environment in Orkney, United Kingdom, during the breeding season. These species are particularly vulnerable to interactions with devices because of their tendency to exploit benthic and epi-benthic prey on or near the seabed. The density of both species decreased as a function of horizontal current speed, and the density of black guillemots also decreased as a function of water elevation. These relationships could be linked to the higher energetic costs of dives in particularly fast horizontal current speeds (> 3 m s−1) and in deeper water. Interactions between these species and moving components therefore seem unlikely at particularly high horizontal current speeds. Combining this information with data on the rotation rates of moving components at lower horizontal current speeds could be used to assess collision risk in this site during breeding seasons. It is also likely that moderating device operation during both the lowest water elevations and the lowest horizontal current speeds could reduce the risk of collisions for these species in this site during this season.
The approaches used in this study could have useful applications within Environmental Impact Assessments, and should be considered when assessing and mitigating negative impacts from specific devices within development sites.

Relevance:

30.00%

Publisher:

Abstract:

Scavenger receptor BI (SR-BI) is the major receptor for high-density lipoprotein (HDL) cholesterol (HDL-C). In humans, high amounts of HDL-C in plasma are associated with a lower risk of coronary heart disease (CHD). Mice that have depleted Scarb1 (SR-BI knockout mice) have markedly elevated HDL-C levels but, paradoxically, increased atherosclerosis. The impact of SR-BI on HDL metabolism and CHD risk in humans remains unclear. Through targeted sequencing of coding regions of lipid-modifying genes in 328 individuals with extremely high plasma HDL-C levels, we identified a homozygote for a loss-of-function variant, in which leucine replaces proline 376 (P376L), in SCARB1, the gene encoding SR-BI. The P376L variant impairs posttranslational processing of SR-BI and abrogates selective HDL cholesterol uptake in transfected cells, in hepatocyte-like cells derived from induced pluripotent stem cells from the homozygous subject, and in mice. Large population-based studies revealed that subjects who are heterozygous carriers of the P376L variant have significantly increased levels of plasma HDL-C. P376L carriers have a profound HDL-related phenotype and an increased risk of CHD (odds ratio = 1.79, which is statistically significant).

Relevance:

30.00%

Publisher:

Abstract:

This thesis consists of three articles on optimal fiscal and monetary policy. In the first article, I study the joint determination of optimal fiscal and monetary policy in a New Keynesian framework with frictional labor markets, money, and distortionary labor income taxation. I find that when workers' bargaining power is low, the Ramsey-optimal policy calls for a significantly higher optimal annual inflation rate, above 9.5%, which is also highly volatile, above 7.4%. The Ramsey government uses inflation to induce efficient fluctuations in labor markets, despite the fact that price changes are costly and despite the presence of time-varying labor taxation. The quantitative results clearly show that the planner relies more heavily on inflation, not on taxes, to smooth distortions in the economy over the business cycle. Indeed, there is a quite clear trade-off between the optimal inflation rate and its volatility on the one hand, and the optimal income tax rate and its variability on the other. The lower the degree of price rigidity, the higher the optimal inflation rate and inflation volatility, and the lower the optimal income tax rate and income tax volatility. For a degree of price rigidity ten times smaller, the optimal inflation rate and its volatility increase remarkably, by more than 58% and 10% respectively, and the optimal income tax rate and its volatility decline dramatically.
These results are of great importance given that in frictional labor market models without fiscal policy and money, or in New Keynesian frameworks even with a rich array of real and nominal rigidities and a tiny degree of price rigidity, price stability appears to be the central objective of optimal monetary policy. In the absence of fiscal policy and money demand, the optimal inflation rate falls very close to zero, with a volatility about 97 percent lower, consistent with the literature. In the second article, I show how the quantitative results imply that workers' bargaining power and the welfare costs of monetary rules are negatively related. In other words, the lower the workers' bargaining power, the larger the welfare costs of monetary policy rules. However, in striking contrast to the literature, rules that respond to output and to labor market tightness entail considerably lower welfare costs than the inflation-targeting rule. This is especially the case for the rule that responds to labor market tightness. Welfare costs also fall remarkably as the size of the output coefficient in the monetary rules increases. My results indicate that by raising workers' bargaining power to the Hosios level or above, the welfare costs of the three monetary rules decrease significantly, and responding to output or to labor market tightness no longer yields lower welfare costs than the inflation-targeting rule, in line with the existing literature.
In the third article, I first show that the Friedman rule in a monetary model with a cash-in-advance constraint on firms is not optimal when the government has access to distortionary consumption taxes to finance its spending. I then argue that, in the presence of these distortionary taxes, the Friedman rule is optimal if we assume a model with raw and efficient labor in which only raw labor is subject to the cash-in-advance constraint and the utility function is homothetic in the two types of labor and separable in consumption. When the production function exhibits constant returns to scale, the Friedman rule is optimal even when wage rates differ, unlike in the cash-credit goods model, where the prices of the two goods are the same. If the production function exhibits increasing or decreasing returns to scale, the wage rates must be equal for the Friedman rule to be optimal.
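For reference, the Friedman rule discussed above sets the net nominal interest rate to zero, so that the opportunity cost of holding money vanishes; in a deterministic steady state with real rate $r$ this implies deflation at approximately the real rate:

```latex
i = 0
\quad\Longleftrightarrow\quad
(1 + r)(1 + \pi) = 1 + i = 1
\quad\Longrightarrow\quad
\pi \approx -r .
```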

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevância:

30.00% 30.00%

Publicador:

Resumo:

In deregulated power markets it is necessary to have an appropriate transmission pricing methodology that also takes into account congestion and reliability, in order to ensure economically viable, equitable and congestion-free power transfer capability, with high reliability and security. This thesis presents the results of research on the development of a Decision Making Framework (DMF) of concepts, data-analytic and modelling methods for reliability-benefit-reflective optimal evaluation of transmission cost for composite power systems, using probabilistic methods. The methodology within the DMF devised and reported in this thesis utilises a full AC Newton-Raphson load flow and a Monte Carlo approach to determine reliability indices, which are then used in the proposed Meta-Analytical Probabilistic Approach (MAPA) for the evaluation and calculation of the Reliability-benefit Reflective Optimal Transmission Cost (ROTC) of a transmission system. The DMF includes methods for allocating transmission-line embedded costs among transmission transactions, accounting for line capacity use as well as congestion costing, applying Power Transfer Distribution Factors (PTDF) as well as Bialek's tracing method; together these form the series of methods and procedures, explained in detail in the thesis, that constitute the proposed MAPA for ROTC. The MAPA utilises bus data, generator data, line data, reliability data and Customer Damage Function (CDF) data for congestion, transmission and reliability costing studies using the proposed application of PTDF and other established methods, which are then compared, analysed and selected according to the area/state requirements and integrated to develop the ROTC.
Case studies involving the standard 7-bus, IEEE 30-bus and 146-bus Indian utility test systems are conducted and reported in the relevant sections of the dissertation. There is close correlation between the results obtained through the proposed application of the PTDF method and those of Bialek's method and various MW-Mile methods. The novel contributions of this research are: first, the application of the PTDF method developed for determining transmission and congestion costs, which is compared with other proven methods; the viability of the developed method is explained in the methodology, discussion and conclusion chapters. Second, the development of a comprehensive DMF that helps decision makers analyse and select a costing approach according to their requirements, since all the costing approaches in the DMF are integrated to achieve the ROTC. Third, the composite methodology for calculating the ROTC has been implemented as suites of algorithms and MATLAB programs for each part of the DMF, described further in the methodology section. Finally, the dissertation concludes with suggestions for future work.
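The PTDF allocation used above can be illustrated with a minimal DC power-flow sketch. The 3-bus network, line reactances and injection below are hypothetical (not the thesis test systems); bus 1 is taken as the slack:

```python
# Minimal DC-power-flow PTDF sketch for a hypothetical 3-bus network
# (bus 1 = slack).  Line reactances in p.u.; all values illustrative.

lines = {          # (from_bus, to_bus): reactance x
    (1, 2): 0.10,
    (1, 3): 0.20,
    (2, 3): 0.25,
}

# Build the reduced susceptance matrix B over the non-slack buses 2 and 3.
idx = {2: 0, 3: 1}
B = [[0.0, 0.0], [0.0, 0.0]]
for (m, n), x in lines.items():
    b = 1.0 / x
    for bus in (m, n):
        if bus in idx:
            B[idx[bus]][idx[bus]] += b
    if m in idx and n in idx:
        B[idx[m]][idx[n]] -= b
        B[idx[n]][idx[m]] -= b

# Invert the 2x2 reduced matrix: X = B^-1 maps injections to bus angles.
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
X = [[ B[1][1] / det, -B[0][1] / det],
     [-B[1][0] / det,  B[0][0] / det]]

def ptdf(line, inj_bus):
    """Fraction of 1 p.u. injected at inj_bus (withdrawn at the slack)
    that flows over `line`, measured from its from-bus to its to-bus."""
    m, n = line
    theta = lambda bus: X[idx[bus]][idx[inj_bus]] if bus in idx else 0.0
    return (theta(m) - theta(n)) / lines[line]

for line in lines:
    print(line, round(ptdf(line, 2), 3))
```

For an injection at bus 2, about 0.818 p.u. flows directly over line 1–2 toward the slack (negative in the 1→2 sign convention) and 0.182 p.u. takes the 2–3–1 path; the two line factors into the slack sum to the full injection.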

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Protected cultivation using high-tunnel structures is a new technology for improving red raspberry production in northern climates. The main objective of this doctoral project was to study the performance of these technologies (high tunnels vs. Voen-type umbrella shelters, compared with open-field cultivation) and their effects on the microclimate, photosynthesis, plant growth and fruit yield for both summer-bearing and everbearing raspberry types (Rubus idaeus L.). Since cultural practices must be adapted to the different growing environments, summer pruning (for the summer-bearing cultivar), optimization of cane density (for the everbearing cultivar) and the use of reflective ground covers (for both raspberry types) were studied under high tunnels and Voen shelters vs. in the open field. Plants grown under high tunnels produced on average 1.2 and 1.5 times the marketable fruit yield of those grown under Voen shelters for the summer-bearing cv. 'Jeanne d'Orléans' and the everbearing cv. 'Polka', respectively. Compared with field-grown raspberries, fruit yield under high tunnels was more than double for cv. 'Jeanne d'Orléans' and nearly triple for cv. 'Polka'. The use of reflective ground covers produced a significant fruit-yield gain of 12% for cv. 'Jeanne d'Orléans' and 17% for cv. 'Polka'. Pruning the first or second flushes significantly improved the fruit yield of cv. 'Jeanne d'Orléans' by 26% on average relative to unpruned plants. Significant fruit-yield increases of 43% and 71% were measured for cv. 'Polka' when density was increased to 4 and 6 canes per pot, respectively, compared with two canes per pot.
During the fruiting period of cv. 'Jeanne d'Orléans', the reflective covers significantly increased the photosynthetic photon flux density (PPFD) reflected onto the lower canopy, by 80% in the open field and 60% under high tunnels, compared with only 14% under the Voen shelter. During the fruiting season of cv. 'Polka', a positive effect of the covers on reflected light (up to 42%) was measured only in the open field. In all cases, the reflective covers had no significant effect on total incident leaf PPFD or on photosynthesis. For cv. 'Jeanne d'Orléans', incident leaf PPFD was attenuated by about 46% under both types of covering relative to the open field. Consequently, photosynthesis was reduced on average by 43% under high tunnels and by 17% under Voen shelters. Similar effects on incident PPFD and photosynthesis were measured for cv. 'Polka'. Although single-leaf photosynthesis rates were consistently lower than those measured for field-grown plants, whole-plant photosynthesis under high tunnels was 51% higher than in the field for cv. 'Jeanne d'Orléans' and 46% higher for cv. 'Polka'. These results are explained by the much larger (nearly double) leaf area of tunnel-grown plants, which compensated for the lower photosynthesis rate per unit leaf area. Supra-optimal leaf temperatures measured under high tunnels (on average 6.6°C higher than in the field), together with the attenuation of incident PPFD (about 43%) by the tunnel coverings, contributed to the reduced photosynthesis rate per unit leaf area. Whole-canopy photosynthesis was closely correlated with fruit yield for both types of red raspberry grown under high tunnels or in the open field.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Creative ways of utilising renewable energy sources for electricity generation, especially in remote areas and particularly in countries dependent on imported energy, while increasing energy security and reducing the cost of such isolated off-grid systems, are becoming an urgent necessity for effective strategic energy-system planning. The aim of this research project was to design and implement a new decision support framework for the optimal design of hybrid micro-grids considering different technology types, where the design objective is to minimize the total cost of the hybrid micro-grid while satisfying the required electric demand. A comprehensive review of the literature on existing analytical decision support tools and on hybrid power systems identified the gaps and the necessary conceptual parts of an analytical decision support framework. As a result, this research proposes and reports an Iterative Analytical Design Framework (IADF) and its implementation for the optimal design of an off-grid renewable-energy-based hybrid smart micro-grid (OGREH-SμG) with intra- and inter-grid (μG2μG & μG2G) synchronization capabilities and a novel storage technique. The modelling, design and simulations were conducted using the HOMER Energy and MATLAB/SIMULINK energy planning and design software platforms. The design, experimental proof of concept, verification and simulation of a new storage concept incorporating a hydrogen peroxide (H2O2) fuel cell are also reported. The implementation of the smart components, consisting of a Raspberry Pi devised and programmed for the semi-smart energy management framework (a novel control strategy including synchronization capabilities) of the OGREH-SμG, is also detailed and reported. The hybrid μG was designed and implemented as a case study for the Bayir/Jordan area.
This research has provided an alternative decision support tool for renewable energy integration, determining the optimal number, type and size of components with which to configure the hybrid μG. In addition, this research has formulated and reported a linear cost function to mathematically verify the computer-based simulations and fine-tune the solutions in the iterative framework, and concluded that such solutions converge to a correct optimal approximation when the properties of the problem are taken into account. This investigation has demonstrated that the implemented and reported OGREH-SμG design, which incorporates wind and solar generation complemented by batteries, two fuel cell units and a diesel generator, and is able to synchronize with other micro-grids, is an effective and near-optimal way of electrifying developing countries with limited resources in a sustainable manner, with minimal impact on the environment while also achieving reductions in GHG emissions. The dissertation concludes with suggested extensions to this work.
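The idea of minimizing a linear cost function over component counts subject to meeting demand can be sketched with a brute-force search. The component names, capacities, unit costs and demand below are hypothetical illustrations, not the thesis data or the IADF itself:

```python
# Brute-force sketch: choose component counts minimizing a linear cost
# function subject to covering peak demand.  All figures are hypothetical.
from itertools import product

components = {            # name: (unit capacity in kW, unit cost in $)
    "wind_turbine": (10.0, 18000),
    "pv_array":     (5.0,   8000),
    "diesel_gen":   (15.0, 30000),
    "fuel_cell":    (4.0,   9000),
}
PEAK_DEMAND_KW = 40.0
MAX_UNITS = 4             # search bound per component type

def optimal_mix():
    """Return (total cost, counts) of the cheapest mix covering peak demand."""
    best = None
    names = list(components)
    for counts in product(range(MAX_UNITS + 1), repeat=len(names)):
        capacity = sum(n * components[name][0] for n, name in zip(counts, names))
        if capacity < PEAK_DEMAND_KW:
            continue  # infeasible: demand not covered
        cost = sum(n * components[name][1] for n, name in zip(counts, names))
        if best is None or cost < best[0]:
            best = (cost, dict(zip(names, counts)))
    return best

cost, mix = optimal_mix()
print(cost, mix)
```

A real sizing study would replace the single peak-demand constraint with time-series simulation of load, resource availability and storage, as done in the HOMER-based work described above; the sketch only shows the shape of the linear objective.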