913 results for Trade off


Relevance: 60.00%

Abstract:

Personal information is increasingly gathered and used to provide services tailored to user preferences, but the datasets used to provide such functionality can represent serious privacy threats if not appropriately protected. Work in privacy-preserving data publishing has targeted privacy guarantees that protect against record re-identification, by making records indistinguishable, or against sensitive attribute value disclosure, by introducing diversity or noise in the sensitive values. However, most approaches fail in the high-dimensional case, and the ones that do not fail introduce a utility cost incompatible with tailored recommendation scenarios. This paper aims at a sensible trade-off between privacy and the benefits of tailored recommendations, in the context of privacy-preserving data publishing. We empirically demonstrate that significant privacy improvements can be achieved at a utility cost compatible with tailored recommendation scenarios, using a simple partition-based sanitization method.
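The abstract does not detail the partition-based sanitization method, but the general idea behind partition-based approaches can be sketched with a toy, k-anonymity-style example (hypothetical records, field names and parameter `k`; not the paper's actual algorithm):

```python
# Toy sketch of partition-based sanitization (k-anonymity style).
# Hypothetical data and parameters; not the method evaluated in the paper.

def spread(records, d):
    vals = [r[d] for r in records]
    return max(vals) - min(vals)

def partition(records, k, dims):
    """Recursively split on the widest dimension while both sides keep >= k records."""
    for d in sorted(dims, key=lambda d: spread(records, d), reverse=True):
        vals = sorted(r[d] for r in records)
        median = vals[len(vals) // 2]
        left = [r for r in records if r[d] < median]
        right = [r for r in records if r[d] >= median]
        if len(left) >= k and len(right) >= k:
            return partition(left, k, dims) + partition(right, k, dims)
    return [records]  # no valid split: this group becomes one partition

def generalize(groups, dims):
    """Replace each quasi-identifier with its partition-level (min, max) range."""
    out = []
    for g in groups:
        ranges = {d: (min(r[d] for r in g), max(r[d] for r in g)) for d in dims}
        for r in g:
            out.append({**r, **ranges})
    return out

records = [{"age": a, "zip": z} for a, z in
           [(25, 100), (26, 101), (40, 200), (41, 202), (43, 201), (27, 99)]]
groups = partition(records, k=3, dims=["age", "zip"])
released = generalize(groups, ["age", "zip"])
# each released record now shares its quasi-identifiers with at least k-1 others
```

The utility cost of such a scheme grows with dimensionality, which is exactly the tension the paper's trade-off addresses.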

Relevance: 60.00%

Abstract:

In the context of active control of rotating machines, standard optimal controller methods enable a trade-off to be made between (weighted) mean-square vibrations and (weighted) mean-square currents injected into magnetic bearings. One shortcoming of such controllers is that no attention is paid to the voltages required. In practice, the available voltage imposes a strict limit on the maximum possible rate of change of control force (force slew rate). This paper removes this shortcoming of traditional optimal control.

Relevance: 60.00%

Abstract:

Herbicide runoff from cropping fields has been identified as a threat to the Great Barrier Reef ecosystem. A field investigation was carried out to monitor the changes in runoff water quality resulting from four different sugarcane cropping systems that included different herbicides and contrasting tillage and trash management practices. These were: (i) Conventional: tillage of beds and inter-rows, with residual herbicides; (ii) Improved: only the beds were tilled (zonal), with reduced residual herbicide use; (iii) Aspirational: minimum tillage (one pass of a single-tine ripper before planting) with trash mulch, no residual herbicides and a legume intercrop after cane establishment; and (iv) New Farming System (NFS): minimum tillage as in the Aspirational practice, with a grain legume rotation and a combination of residual and knockdown herbicides. Results suggest soil and trash management had a larger effect on herbicide losses in runoff than the physico-chemical properties of the herbicides. Improved practices, with 30% lower atrazine application rates than conventional systems, reduced runoff volumes by 40% and atrazine loss by 62%. There was a 2-fold variation in atrazine and a >10-fold variation in metribuzin loads in runoff water between reduced tillage systems differing in soil disturbance and surface residue cover from the previous rotation crops, despite the same herbicide application rates. The elevated risk of offsite herbicide losses was illustrated by the high concentrations of diuron (14 µg L⁻¹) recorded in runoff that occurred >2.5 months after herbicide application in a first ratoon crop. A cropping system employing less persistent non-selective herbicides and an inter-row soybean mulch resulted in no residual herbicide contamination in runoff water, but recorded 12.3% lower yield compared to the Conventional practice.
These findings reveal a trade-off between achieving good water quality with minimal herbicide contamination and maintaining farm profitability with good weed control.

Relevance: 60.00%

Abstract:

Conservation of the seven lagoons of the Palavas complex (southern France) has been severely impaired by nutrient over-enrichment for at least four decades. The effluents of the Montpellier wastewater treatment plant (WWTP) represented the main nutrient input. To improve the water quality of these lagoons, this WWTP was renovated and upgraded and, since the end of 2005, its effluents have been discharged 11 km offshore into the Mediterranean (total investment €150 M). Possibilities of ecosystem restoration as part of a conservation programme were explored by a focus group of experts. Their tasks were: (i) to evaluate the impact of the reduction of the nutrient input; (ii) if necessary, to design additional measures for an active restoration programme; and (iii) to predict ecosystem trajectories for the different cases. Extension of Magnoliophyta meadows can be taken as a proxy for ecosystem restoration, as they favour the increase of several fish (seahorses) and bird (ducks, swans, herons) species, albeit at a trade-off for greater flamingos. Additional measures for active ecosystem restoration were recommended only for the most impaired lagoon, Méjean, while the least impaired lagoon, Ingril, is already on a trajectory of spontaneous recovery. A multiple contingent valuation considering four different management options for the Méjean lagoon was used in a pilot study based on face-to-face interviews with 159 respondents. Three levels of ecosystem restoration were expressed in terms of recovery of Magnoliophyta meadows, including their impact on emblematic fish and avifauna. These were combined with different options for access (status quo, increased access, increased access with measures to reduce disturbance). The results show that local populations are willing to pay about €25 per year for the highest level of ecological restoration, but only about €5 for additional footpaths and hides.

Relevance: 60.00%

Abstract:

Crop models are simplified mathematical representations of the interacting biological and environmental components of the dynamic soil–plant–environment system. Sorghum crop modeling has evolved in parallel with crop modeling capability in general, since its origins in the 1960s and 1970s. Here we briefly review the trajectory in sorghum crop modeling leading to the development of advanced models. We then (i) overview the structure and function of the sorghum model in the Agricultural Production Systems sIMulator (APSIM) to exemplify advanced modeling concepts that suit both agronomic and breeding applications, (ii) review an example of the use of sorghum modeling in supporting agronomic management decisions, (iii) review an example of the use of sorghum modeling in plant breeding, and (iv) consider implications for future roles of sorghum crop modeling. Modeling and simulation provide an avenue to explore the consequences of crop management decision options in situations confronted with risks associated with seasonal climate uncertainties. Here we consider the possibility of manipulating planting configuration and density in sorghum as a means to manipulate the productivity–risk trade-off. A simulation analysis of decision options is presented and avenues for its use with decision-makers are discussed. Modeling and simulation also provide opportunities to improve breeding efficiency, either by dissecting complex traits into targets more amenable to genetics and breeding, or by trait evaluation via phenotypic prediction in target production regions to help prioritize effort and assess breeding strategies. Here we consider studies on the stay-green trait in sorghum, which confers a yield advantage in water-limited situations, to exemplify both aspects. The possible future roles of sorghum modeling in agronomy and breeding are discussed, as are opportunities related to their synergistic interaction.
The potential to add significant value to the revolution in plant breeding associated with genomic technologies is identified as the new modeling frontier.

Relevance: 60.00%

Abstract:

Changes to homelessness legislation in post-devolution Scotland have resulted in an expansion of rights for homeless households seeking formal assistance from local authorities. These changes have led to Scotland’s homelessness arrangements being considered among the most progressive in Europe. In recent years, however, the Scottish Government has increasingly promoted homelessness prevention and Housing Options approaches as a means by which homelessness might be avoided or resolved without recourse to statutory rights. As part of that, they have promoted greater use of the private rented sector (PRS) as a key housing option, with the potential to meet the needs of homeless households. The arguments made to support use of the PRS have much in common with arguments for privatisation in other areas of social policy, notably greater choice for the individual promoting better welfare outcomes, and competition among providers encouraging improvements in quality of service provision. Critics have argued that such benefits may not be realised and that, on the contrary, privatisation may lead to exclusion or act to worsen households’ outcomes. This thesis considers the extent to which the PRS has been utilised in Scotland to accommodate homeless households, and the consequences of this for their welfare. The thesis uses a combination of quantitative and qualitative methods. To examine trends in the use of the PRS, it presents quantitative analysis of the data on the operation of the statutory system and Housing Options arrangements, and of data from a survey of local authority homelessness strategy officers. To examine the consequences of this for homeless households, the thesis uses qualitative research involving face-to-face interviews with 35 homeless households across three local authority areas. 
This research considers the extent to which households’ experiences of homelessness, housing need and the PRS reflect the arguments presented in the literature, and how settled accommodation has impacted on households’ ability to participate fully in society. The research found an increasing but still limited role for the PRS in resolving statutory homelessness in Scotland, with indications that the PRS is being increasingly used as part of the Housing Options approach and as a means of resolving homelessness outside the statutory system. The PRS is being utilised to varying degrees across different local authority areas, and a variety of methods are being used to do so. While local authorities saw clear advantages to making greater use of the sector, a number of significant barriers, including affordability, available stock and landlord preferences, made this difficult in practice. Research with previously homeless households in the PRS similarly found broadly positive experiences and views of the sector, particularly with regard to enabling households to access good quality accommodation in desirable areas of their choosing, with many households highlighting improvements relating to social inclusion and participation. Nevertheless, concerns around the security of tenure offered by the sector, repairs, service standards and unequal power relations between landlord and tenant persisted. As such, homeless households frequently expressed their decision to enter the sector in terms of a trade-off between choice and security.

Relevance: 60.00%

Abstract:

As anthropogenic activities tip many ecosystems into different functional regimes, the resilience of social-ecological systems is becoming a pressing issue. Local actors, involved in a wide variety of groups (ranging from independent local initiatives to large formal institutions), can act on these issues by collaborating on the development, promotion or implementation of practices better aligned with what the environment can provide. From these repeated collaborations, complex networks emerge, and it has been shown that the topology of these networks can improve the resilience of the social-ecological systems (SESs) in which they take part. The topology of actor networks that favours the resilience of their SES is characterised by a combination of several factors: the structure must be modular, to help the different groups develop and propose solutions that are both more innovative (by reducing homogenisation across the network) and closer to their own interests; it must be well connected and easily synchronisable, to facilitate consensus, increase social capital and improve learning capacity; and it must be robust, so that the first two characteristics do not suffer from the voluntary withdrawal or sidelining of certain actors. These characteristics, which are relatively intuitive both conceptually and in their mathematical application, are often used separately to analyse the structural qualities of empirical actor networks. However, some of them are by nature mutually incompatible: for example, a network's modularity cannot increase at the same rate as its connectivity, and connectivity cannot be improved while also improving robustness.
This obstacle makes it difficult to build a global measure, because the degree to which an actor network helps improve the resilience of its SES cannot be a simple sum of the characteristics listed above, but is instead the result of a subtle compromise between them. The work presented here aims (1) to explore the trade-offs between these characteristics; (2) to propose a measure of the degree to which an empirical actor network contributes to the resilience of its SES; and (3) to analyse an empirical network in the light of, among other things, these structural qualities. The thesis is organised as an introduction followed by four chapters numbered 2 to 5. Chapter 2 is a literature review on the resilience of SESs; it identifies a series of structural characteristics (and the corresponding network measures) linked to improved resilience in SESs. Chapter 3 is a case study of the Eyre Peninsula, a rural region of South Australia where land use and climate change contribute to the erosion of biodiversity. For this case study, fieldwork was carried out in 2010 and 2011, during which a series of interviews made it possible to draw up a list of the actors involved in biodiversity co-management on the peninsula. The data collected were used to develop an online questionnaire documenting the interactions between these actors. These two steps allowed the reconstruction of a weighted, directed network of 129 individual actors and 1180 relations. Chapter 4 describes a methodology for measuring the degree to which an actor network contributes to the resilience of the SES in which it is embedded.
The method proceeds in two steps: first, an optimisation algorithm (simulated annealing) is used to build a semi-random archetype corresponding to a compromise between high levels of modularity, connectivity and robustness; second, an empirical network (such as that of the Eyre Peninsula) is compared to the archetypal network using a structural distance measure. The shorter the distance, the closer the empirical network is to its optimal configuration. The fifth and final chapter improves on the simulated annealing algorithm used in Chapter 4. As is usual for this kind of algorithm, the simulated annealing projected the dimensions of the multi-objective problem onto a single dimension (as a weighted average). While this technique gives very good results for a single run, it produces only one solution among the multitude of possible compromises between the different objectives. To explore these compromises more fully, we propose a multi-objective simulated annealing algorithm that, rather than optimising a single solution, optimises a multidimensional surface of solutions. This study, which focuses on the social part of social-ecological systems, improves our understanding of the actor structures that contribute to the resilience of SESs. It shows that while some resilience-enhancing characteristics are incompatible (modularity and connectivity, or, to a lesser extent, connectivity and robustness), others are more easily reconciled (connectivity and synchronisability, or, to a lesser extent, modularity and robustness). It also provides an intuitive method for quantitatively assessing empirical actor networks, opening the way to, for example, comparisons of case studies or monitoring of actor networks over time.
In addition, this thesis includes a case study that sheds light on the importance of certain institutional groups in coordinating collaborations and knowledge exchange between actors with potentially divergent interests.
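The weighted-average scalarization that the final chapter improves upon can be illustrated with a minimal simulated annealing sketch (toy one-dimensional objectives and weights standing in for the thesis's actual network measures; not its code):

```python
import math
import random

random.seed(1)

# Two competing toy objectives standing in for, e.g., modularity and
# connectivity: improving one degrades the other.
def modularity_like(x):   return 1.0 - x
def connectivity_like(x): return x

def weighted_score(x, w=0.5):
    # single-dimension projection (weighted average) of the multi-objective problem
    return w * modularity_like(x) + (1 - w) * connectivity_like(x)

def anneal(score, steps=5000, t0=1.0, cooling=0.999):
    """Maximize `score` over x in [0, 1] by simulated annealing."""
    x = random.random()
    best, best_s = x, score(x)
    t = t0
    for _ in range(steps):
        cand = min(1.0, max(0.0, x + random.uniform(-0.1, 0.1)))
        delta = score(cand) - score(x)
        # accept improvements always, worsenings with probability exp(delta / t)
        if delta > 0 or random.random() < math.exp(delta / t):
            x = cand
        if score(x) > best_s:
            best, best_s = x, score(x)
        t *= cooling
    return best, best_s

# With w = 0.7, the single returned solution tilts toward the
# modularity-like objective; every other compromise is lost.
x_star, s = anneal(lambda x: weighted_score(x, w=0.7))
```

A multi-objective variant would instead maintain a set of non-dominated solutions, approximating the whole compromise surface rather than this single point.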

Relevance: 60.00%

Abstract:

This paper proposes that Brazil could improve political accountability by breaking up many of the statewide districts it uses to elect its deputies into smaller districts, each electing fewer deputies. The central argument is that districts electing low-to-moderate numbers of legislators make it possible to optimize the well-known trade-off between inclusive representation and accountable government.

Relevance: 60.00%

Abstract:

The European sea bass (Dicentrarchus labrax) is an economically important fish native to the Mediterranean and Northern Atlantic. Its complex life cycle involves many migrations through temperature gradients that affect the energetic demands of swimming. Previous studies have shown large intraspecific variation in swimming performance and temperature tolerance, which could include deleterious and advantageous traits under the evolutionary pressure of climate change. However, little is known of the underlying determinants of this individual variation. We investigated individual variation in temperature tolerance in 30 sea bass by exposing them to a warm temperature challenge test. The eight most temperature-tolerant and eight most temperature-sensitive fish were then studied further to determine maximal swimming speed (U-CAT), aerobic scope and post-exercise oxygen consumption. Finally, ventricular contractility in each group was determined using isometric muscle preparations. The temperature-tolerant fish showed lower resting oxygen consumption rates, possessed larger hearts and initially recovered from exhaustive exercise faster than the temperature-sensitive fish. Thus, whole-animal temperature tolerance was associated with important performance traits. However, the temperature-tolerant fish also demonstrated poorer maximal swimming capacity (i.e. lower U-CAT) than their temperature-sensitive counterparts, which may indicate a trade-off between temperature tolerance and swimming performance. Interestingly, the larger relative ventricular mass of the temperature-tolerant fish did not equate to greater ventricular contractility, suggesting that larger stroke volumes, rather than greater contractile strength, may be associated with thermal tolerance in this species.

Relevance: 60.00%

Abstract:

Automotive producers are aiming to make their order fulfilment processes more flexible. Opening the pipeline of planned products for dynamic allocation to dealers/customers is a significant step towards flexibility, but the behaviour of such Virtual-Build-To-Order systems is complex to predict, and their performance varies significantly as product variety levels change. This study investigates the potential for intelligent control of the pipeline feed, taking into account the current status of inventory (level and mix), the volume and mix of unsold products in the planning pipeline, and the demand profile. Five ‘intelligent’ methods for selecting the next product to be planned into the production pipeline are analysed using a discrete event simulation model and compared to an unintelligent random feed. The methods are tested under two conditions: first, when customers must be fulfilled with the exact product they request, and second, when customers trade off a compromise in specification for a shorter waiting time. The two forms of customer behaviour have a substantial impact on the performance of the methods, and there are also significant differences between the methods themselves. When the producer has an accurate model of customer demand, methods that attempt to harmonise the mix in the system to the demand distribution are superior.
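The "harmonise the mix to demand" idea can be sketched as a simple feed rule (hypothetical variants and numbers; the study itself uses a much richer discrete-event simulation): feed next the variant whose share of unsold stock plus pipeline falls furthest below its demand share.

```python
# Toy sketch of a demand-harmonising pipeline feed rule.
# Hypothetical product variants and counts; not the paper's simulation model.

demand_share = {"A": 0.5, "B": 0.3, "C": 0.2}   # believed customer demand mix
in_system = {"A": 40, "B": 35, "C": 25}          # unsold stock + planned pipeline

def next_to_plan(demand_share, in_system):
    total = sum(in_system.values())
    # shortfall: how far each variant's system share lags its demand share
    def shortfall(v):
        return demand_share[v] - in_system[v] / total
    return max(demand_share, key=shortfall)

v = next_to_plan(demand_share, in_system)
# "A" has the largest shortfall (0.5 demand share vs 0.40 system share)
```

Repeatedly applying such a rule pulls the system mix toward the demand distribution, which is the property the superior methods in the study exploit.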

Relevance: 60.00%

Abstract:

Understanding how imperfect information affects firms' investment decisions helps answer important questions in economics, such as how we may better measure economic uncertainty; how firms' forecasts affect their decision-making when their beliefs are not backed by economic fundamentals; and how important the business cycle impacts of changes in firms' productivity uncertainty are in an environment of incomplete information. This dissertation provides a synthetic answer to all these questions, both empirically and theoretically. The first chapter provides empirical evidence that survey-based forecast dispersion identifies a distinctive type of second-moment shock, different from the canonical volatility shocks to productivity, i.e. uncertainty shocks. Such forecast disagreement disturbances can affect the distribution of firm-level beliefs regardless of whether belief changes are backed by changes in economic fundamentals. At the aggregate level, innovations that increase the dispersion of firms' forecasts lead to persistent declines in aggregate investment and output, followed by a slow recovery. By contrast, a larger dispersion of future firm-specific productivity innovations, the standard way to measure economic uncertainty, delivers the "wait and see" effect: aggregate investment experiences a sharp decline, followed by a quick rebound, and then overshoots. At the firm level, the data show that more productive firms increase investment when the dispersion of future productivity rises, whereas investment drops when firms disagree more about their future business conditions. These findings challenge the view that the dispersion of firms' heterogeneous beliefs captures the concept of economic uncertainty as defined by a model of uncertainty shocks.
The second chapter presents a general equilibrium model of heterogeneous firms subject to real productivity uncertainty shocks and informational disagreement shocks. Because firms cannot perfectly disentangle aggregate from idiosyncratic productivity under imperfect information, information quality drives a wedge between the unobserved productivity fundamentals and the firms' beliefs about how productive they are. The distribution of firms' beliefs is no longer perfectly aligned with the distribution of firm-level productivity across firms. This model not only explains why, at the macro and micro level, disagreement shocks differ from uncertainty shocks, as documented in Chapter 1, but also helps resolve a key challenge faced by the standard framework for studying economic uncertainty: a trade-off between sizable business cycle effects due to changes in uncertainty and the right amount of pro-cyclicality of firm-level investment rate dispersion, as measured by its correlation with the output cycle.

Relevance: 60.00%

Abstract:

Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all the information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy). Therefore, the evaluation considers both objectives (e.g., plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existing batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon the theory of Markov Decision Processes (MDPs), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use are from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithms on a variety of applications, including dependency parsing, machine translation, and question answering. We show that our approach achieves a better cost-accuracy trade-off than the batch approach and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs.
We formulate a decision-making process that selects pieces of the input sequentially, and the selection is adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves similar prediction quality to methods that use all input, while inducing a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
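The cost-accuracy evaluation described in this abstract can be made concrete with a small Pareto-frontier computation over hypothetical (cost, accuracy) operating points: a point survives only if no other point matches or beats it on both objectives.

```python
# Pareto frontier over (cost, accuracy) operating points.
# Hypothetical numbers, for illustration only.

def pareto_frontier(points):
    """Keep points not dominated by any other (lower/equal cost AND higher/equal accuracy)."""
    def dominated(p, q):
        return q != p and q[0] <= p[0] and q[1] >= p[1]
    return [p for p in points if not any(dominated(p, q) for q in points)]

points = [(0.2, 0.60), (0.4, 0.75), (0.5, 0.70), (0.8, 0.90), (1.0, 0.88)]
frontier = pareto_frontier(points)
# (0.5, 0.70) is dominated by (0.4, 0.75); (1.0, 0.88) by (0.8, 0.90)
```

Comparing such frontiers, rather than single accuracy numbers, is what lets one method be called better than another at every level of input cost.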

Relevance: 60.00%

Abstract:

This thesis consists of three articles on optimal fiscal and monetary policy. In the first article, I study the joint determination of optimal fiscal and monetary policy in a New Keynesian framework with frictional labour markets, money, and distortionary labour income taxes. I find that when workers' bargaining power is low, the Ramsey-optimal policy calls for a significantly higher optimal annual inflation rate, above 9.5%, which is also highly volatile, above 7.4%. The Ramsey government uses inflation to induce efficient fluctuations in labour markets, despite the fact that price changes are costly and despite the presence of time-varying labour taxation. The quantitative results clearly show that the planner relies more heavily on inflation, not taxes, to smooth distortions in the economy over the business cycle. Indeed, there is a very clear trade-off between the optimal inflation rate and its volatility on the one hand and the optimal income tax rate and its variability on the other. The lower the degree of price rigidity, the higher the optimal inflation rate and inflation volatility, and the lower the optimal income tax rate and income tax volatility. For a degree of price rigidity ten times smaller, the optimal inflation rate and its volatility increase remarkably, by more than 58% and 10% respectively, and the optimal income tax rate and its volatility decline dramatically.
These results are of great importance given that in frictional labour market models without fiscal policy and money, or in New Keynesian frameworks even with a rich array of real and nominal rigidities and a tiny degree of price rigidity, price stability appears to be the central goal of optimal monetary policy. In the absence of fiscal policy and money demand, the optimal inflation rate falls very close to zero, with about 97 percent less volatility, consistent with the literature. In the second article, I show how the quantitative results imply that workers' bargaining power and the welfare costs of monetary rules are negatively related: the lower the workers' bargaining power, the larger the welfare costs of monetary policy rules. However, in striking contrast to the literature, rules that respond to output and to labour market tightness entail considerably lower welfare costs than the inflation-targeting rule. This is especially the case for the rule that responds to labour market tightness. Welfare costs also fall remarkably as the size of the output coefficient in the monetary rules increases. My results indicate that by raising workers' bargaining power to the Hosios level or above, the welfare costs of the three monetary rules fall significantly, and responding to output or to labour market tightness no longer entails lower welfare costs than the inflation-targeting rule, in line with the existing literature.
In the third article, I first show that the Friedman rule in a monetary model with a cash-in-advance constraint on firms is not optimal when the government has access to distortionary consumption taxes to finance its spending. I then argue that the Friedman rule in the presence of these distortionary taxes is optimal if we assume a model with raw and effective labour in which only raw labour is subject to the cash-in-advance constraint and the utility function is homothetic in the two types of labour and separable in consumption. When the production function exhibits constant returns to scale, unlike the cash-credit goods model in which the prices of the two goods are the same, the Friedman rule is optimal even when wage rates differ. If the production function has increasing or decreasing returns to scale, the Friedman rule is optimal only if the wage rates are equal.

Relevance: 60.00%

Abstract:

The supply of microcredit by microfinance institutions follows two distinct approaches, so-called "poverty alleviation" and "financial self-sustainability". This paper presents a worldwide comparison of four regions of developing countries in order to identify under which approach the population accesses microcredit, and to verify whether there is a trade-off between the two approaches. For this comparison, ten financial and social indicators were used, classified into two categories: financial sustainability and outreach (poverty level). The results show that the financial model is most prominent in Latin America and the Caribbean and in the Middle East/North Africa, whereas in Africa the social model prevails, and in South Asia there is a balance between the two approaches. The contribution to poverty alleviation is also confirmed when access to microcredit is directed mainly at women.

Relevance: 60.00%

Abstract:

This study investigates the influence of asset class and the breakdown of tangibility as determinants of the capital structure of companies listed on the BM&FBOVESPA over the period 2008-2012. Two classes of current assets were composed; grouped by liquidity, they correspond to what financial institutions analyse when granting credit: current resources (cash, banks and financial investments) and operations with duplicatas (inventories and receivables). The breakdown of tangible assets was based on their main components offered as collateral for loans, such as machinery & equipment and land & buildings. To extend the analysis, three leverage metrics (accounting, financial and market) were applied and the sample was divided into the economic sectors adopted by the BM&FBOVESPA. A dynamic panel data model estimated by two-step system GMM was used, given its robustness to endogeneity problems and omitted-variable bias. The results suggest that current resources are determinants of the capital structure, possibly because they are characterized as proxies for financial solvency, with a positive relationship to debt. The sectoral analysis confirmed the results for current resources. Asset tangibility is inversely related to leverage. When disaggregated into its main components, the significant negative influence of machinery & equipment was most marked in the Industrial Goods sector. This result shows that, on average, the assets most specific to a company's operating activities are associated with less use of third-party resources. As complementary results, leverage was found to be persistent, which is consistent with the static trade-off theory. Specifically for financial leverage, persistence remains relevant when controlling for the lagged current asset class variables.
The proxy for growth opportunities, measured by market-to-book, shows a contradictory coefficient sign. Company size is positively related to debt, in line with the static trade-off theory. Profitability is the most consistent variable across all estimations, showing a strong and significant negative relationship with leverage, as the pecking order theory predicts.