941 results for Economies of scale


Relevância: 100.00%

Resumo:

This report presents a metric called FRESH (Foodservice Impact Rating for Environmentally Sustainable Hospitality Events). FRESH can be used to evaluate the sustainability performance of any foodservice meal period or event in the hospitality sector, based on seven indicators: a (post-consumer) food-waste indicator, a no-show indicator (when unexpectedly few people show up), an over-show indicator (when too many people show up), a planning indicator (measuring intentional overproduction), a portion-size indicator (measuring per-guest consumption against expectations), an economies-of-scale indicator, and a post-event indicator (which depends on disposal approaches). FRESH can help managers, authorities, and potential guests evaluate the sustainability of food production in any establishment.
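The report's abstract does not say how the seven indicators combine into a single rating, so the sketch below assumes a simple weighted average of normalized indicator scores; the indicator names, the [0, 1] scoring convention, and the equal default weights are all hypothetical:

```python
# Hypothetical sketch of a FRESH-style composite: seven indicator scores in
# [0, 1] (1.0 = best) combined by a weighted average. Names and weights are
# assumptions for illustration, not taken from the report.
FRESH_INDICATORS = [
    "food_waste", "no_show", "over_show", "planning",
    "portion_size", "economies_of_scale", "post_event",
]

def fresh_score(scores, weights=None):
    """Combine the seven indicator scores into one rating in [0, 1]."""
    if weights is None:
        weights = {name: 1.0 for name in FRESH_INDICATORS}  # equal weights
    total = sum(weights[name] for name in FRESH_INDICATORS)
    return sum(scores[name] * weights[name] for name in FRESH_INDICATORS) / total

event = {name: 0.8 for name in FRESH_INDICATORS}
print(round(fresh_score(event), 2))  # 0.8
```

A real implementation would also need per-indicator normalization rules (e.g. grams of food waste per guest mapped onto [0, 1]), which the abstract does not specify.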

Relevância: 100.00%

Resumo:

This thesis examines the role of market power in the banking market. The emphasis is on risk-taking, economies of scale, the economic efficiency of the market, and the transmission of shocks. The first chapter presents a dynamic stochastic general equilibrium model of an open economy with a monopolistically competitive banking market. Following Krugman's (1979, 1980) hypothesis on the relationship between economies of scale and exports, banks must pay a transaction cost to trade abroad, which decreases as the volume of their domestic activity increases. This gives banks an incentive to reduce their domestic margin in order to profit more from the foreign market. The model is solved and simulated for various degrees of concentration in the banking market. The results indicate that two opposing forces, economies of scale and market power, confront each other as the market becomes more concentrated. Concentration also allows banks to expand their foreign activities, which in turn makes them more vulnerable to external shocks. The second chapter develops a similar framework in which banks face credit risk. This risk is partially insured by collateral provided by entrepreneurs and can be limited through financial effort. The model is solved and simulated for various degrees of concentration in the banking market. The results show that greater market power reduces the size of the financial market and of steady-state output, but gives banks an incentive to take fewer risks. Moreover, economies with a highly concentrated banking market are less sensitive to certain shocks, since higher margins initially give banks room to manoeuvre in the event of negative shocks.
This moderating effect disappears when banks can freely enter and exit the market. A further extension with economies of scale shows that, under certain conditions, a moderately concentrated market is optimal for the economy. The third chapter uses a mean-variance portfolio model to represent a bank with market power. The return on deposits and assets can vary with the quantity traded, which modifies the bank's portfolio choice. The bank tends to choose a portfolio with lower variance when it can obtain a higher return on an asset. Market power over deposits leads to a similar result when that power is moderate, but variance eventually increases once a certain level is reached. The results are robust to different demand functions.
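The mean-variance setting of the third chapter can be illustrated with a toy two-asset portfolio choice; the utility form (expected return minus a risk penalty), the grid search, and all numbers below are assumptions for the sketch, not the thesis's actual model:

```python
# Toy mean-variance portfolio choice for a bank holding two assets.
# Utility = expected return - (risk_aversion / 2) * variance; a grid search
# over the weight on asset 1 stands in for the analytical solution.

def best_weight(r1, r2, v1, v2, cov=0.0, risk_aversion=4.0, steps=1000):
    def utility(w):
        exp_ret = w * r1 + (1 - w) * r2
        var = w * w * v1 + (1 - w) ** 2 * v2 + 2 * w * (1 - w) * cov
        return exp_ret - 0.5 * risk_aversion * var
    return max((i / steps for i in range(steps + 1)), key=utility)

# A higher return on asset 1 (e.g. from market power) shifts the portfolio
# toward that asset, changing both expected return and portfolio variance.
print(best_weight(r1=0.05, r2=0.04, v1=0.02, v2=0.02) > 0.5)  # True
```

With equal returns the optimal weight is 0.5 by symmetry; raising one asset's return tilts the portfolio toward it, which is the mechanism the chapter studies under market power.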

Relevância: 100.00%

Resumo:

In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, the plethora of small devices connected to the internet (the Internet of Things), social networks, communication networks, and many others. Interactive querying and large-scale analytics are increasingly used to derive value from this big data. A large portion of this data is stored and processed in the Cloud because of the advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership, and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage, and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required, so reducing the cost of data analytics in the Cloud remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built, and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying. This provides data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! delivers early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego-network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
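The neighborhood-centric idea can be sketched in a few lines: extract each vertex's multi-hop neighborhood and hand that subgraph to the analysis task, rather than programming vertex by vertex. This is an illustrative reconstruction of the primitive, not NSCALE's actual API:

```python
# Illustrative k-hop neighborhood extraction (the primitive a neighborhood-
# centric framework operates on). BFS from the root, stopping at k hops.
from collections import deque

def k_hop_neighborhood(adj, root, k):
    """Return the set of vertices within k hops of root in graph adj."""
    seen, frontier = {root}, deque([(root, 0)])
    while frontier:
        v, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand beyond k hops
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                frontier.append((w, depth + 1))
    return seen

# Ego-network analysis then applies the user's function to each such subgraph.
graph = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
print(k_hop_neighborhood(graph, 0, 2))  # {0, 1, 2, 3}
```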

Relevância: 100.00%

Resumo:

SMEs have become a prominent topic in recent years because of their effect on employment. A large share of new jobs has been created in SMEs rather than in large companies. At the same time, internationalization has been one of the great transformative forces of our time affecting the economy. By internationalizing, companies have gained new channels for raising finance, expanded and diversified their customer base, and obtained economies of scale. Internationality poses challenges for companies' financial reporting, which is why the IFRS for SMEs standard was created. Its purpose is to improve comparability between companies from different countries and to lighten the reporting burden. This thesis examined how applying the IFRS for SMEs affects the preparers and users of financial statements. The thesis begins with a theoretical part reviewing recent research, and the questions it raised were used to conduct the interview analysed in the empirical part. The study found that the views of researchers and of the interviewee on the IFRS for SMEs are largely similar. The problems of the IFRS for SMEs are its brevity, which causes problems of interpretation, and its complexity, which strains the reporter's resources. SMEs most often operate only domestically, so the benefits remain limited. Smaller SMEs have few stakeholders who use their financial statements, so the benefits to users are small. Nevertheless, fair-value measurement and disclosure requirements may ease access to finance for some companies, if the IFRS for SMEs comes to be applied in the same way in every country and company.

Relevância: 100.00%

Resumo:

This thesis presents approximation algorithms for some NP-hard combinatorial optimization problems on graphs and networks; in particular, we study problems related to Network Design. Under the widely believed complexity-theoretic assumption that P is not equal to NP, there are no efficient (i.e., polynomial-time) algorithms that solve these problems exactly. Hence, if one desires efficient algorithms for such problems, it is necessary to consider approximate solutions: an approximation algorithm for an NP-hard problem is a polynomial-time algorithm which, for any instance of the problem, finds a solution whose value is guaranteed to be within a multiplicative factor of the value of an optimal solution to that instance. We attempt to design algorithms for which this factor, referred to as the approximation ratio of the algorithm, is as small as possible. The field of Network Design comprises a large class of problems that deal with constructing networks of low cost and/or high capacity, routing data through existing networks, and many related issues. In this thesis, we focus chiefly on designing fault-tolerant networks. Two vertices u, v in a network are said to be k-edge-connected if deleting any set of k − 1 edges leaves u and v connected; similarly, they are k-vertex-connected if deleting any set of k − 1 other vertices or edges leaves u and v connected. We focus on building networks that are highly connected, meaning that even if a small number of edges and nodes fail, the remaining nodes will still be able to communicate. A brief description of some of our results is given below. We study the problem of building 2-vertex-connected networks that are large and have low cost. Given an n-node graph with costs on its edges and any integer k, we give an O(log n log k) approximation for the problem of finding a minimum-cost 2-vertex-connected subgraph containing at least k nodes.
We also give an algorithm of similar approximation ratio for maximizing the number of nodes in a 2-vertex-connected subgraph subject to a budget constraint on the total cost of its edges. Our algorithms are based on a pruning process that, given a 2-vertex-connected graph, finds a 2-vertex-connected subgraph of any desired size and of density comparable to the input graph, where the density of a graph is the ratio of its cost to the number of vertices it contains. This pruning algorithm is simple and efficient, and is likely to find additional applications. Recent breakthroughs on vertex-connectivity have made use of algorithms for element-connectivity problems. We develop an algorithm that, given a graph with some vertices marked as terminals, significantly simplifies the graph while preserving the pairwise element-connectivity of all terminals; in fact, the resulting graph is bipartite. We believe that our simplification/reduction algorithm will be a useful tool in many settings. We illustrate its applicability by giving algorithms to find many trees that each span a given terminal set, while being disjoint on edges and non-terminal vertices; such problems have applications in VLSI design and other areas. We also use this reduction algorithm to analyze simple algorithms for single-sink network design problems with high vertex-connectivity requirements; we give an O(k log n)-approximation for the problem of k-connecting a given set of terminals to a common sink. We study similar problems in which different types of links, of varying capacities and costs, can be used to connect nodes; assuming there are economies of scale, we give algorithms to construct low-cost networks with sufficient capacity or bandwidth to simultaneously support flow from each terminal to the common sink along many vertex-disjoint paths. We further investigate capacitated network design, where edges may have arbitrary costs and capacities. 
Given a connectivity requirement R_uv for each pair of vertices u,v, the goal is to find a low-cost network which, for each uv, can support a flow of R_uv units of traffic between u and v. We study several special cases of this problem, giving both algorithmic and hardness results. In addition to Network Design, we consider certain Traveling Salesperson-like problems, where the goal is to find short walks that visit many distinct vertices. We give a (2 + epsilon)-approximation for Orienteering in undirected graphs, achieving the best known approximation ratio, and the first approximation algorithm for Orienteering in directed graphs. We also give improved algorithms for Orienteering with time windows, in which vertices must be visited between specified release times and deadlines, and other related problems. These problems are motivated by applications in the fields of vehicle routing, delivery and transportation of goods, and robot path planning.
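The density notion underlying the pruning step above (the ratio of a subgraph's edge cost to its number of vertices) is simple to state in code; this sketch shows only the quantity the pruning preserves, not the pruning algorithm itself, and the example values are made up:

```python
# Density of a (sub)graph, as defined for the pruning process: total cost of
# edges with both endpoints inside the vertex set, divided by vertex count.

def density(vertices, edge_costs):
    cost = sum(c for (u, v), c in edge_costs.items()
               if u in vertices and v in vertices)
    return cost / len(vertices)

verts = {1, 2, 3, 4}
edge_costs = {(1, 2): 3.0, (2, 3): 1.0, (3, 4): 2.0, (4, 1): 2.0}
print(density(verts, edge_costs))  # 8.0 cost / 4 vertices = 2.0
```

The pruning guarantee is that a 2-vertex-connected subgraph of any desired size can be found whose density is comparable to that of the input graph.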

Relevância: 100.00%

Resumo:

This chapter explores the relationship between environmental conflicts and technical progress, seeking to understand, in the case of the large mines of the Iberian Pyrite Belt in the Alentejo, how emerging environmental problems conditioned performance or led to the search for alternative technical solutions, taking the beginning of World War II as the chronological limit of this observation. In the absence of company archives, the research was based on administrative documents held in the state archives (mining engineers' reports, the licensing of mining activities), on reports and documents published in the specialized mining press, in particular the Bulletin of the Ministry of Public Works, Trade and Industry and the Journal of Public Works, Trade and Industry (both in Portuguese), and finally on the local press. Despite that limitation, the available information shows that, in globally competitive markets, the success of the British enterprise at Santo Domingo rested on the active search for new technical solutions and on the creation and adaptation of existing knowledge to local problems in order to maximize the mineral resources available. The early development of hydrometallurgical processes for the treatment of poor ores, known as 'natural cementation', can be explained as the way these companies tried to solve problems of competitiveness by boosting economies of scale. In doing so, they transferred environmental costs previously limited to agriculture onto more fragile social groups, the poor fishermen of the Guadiana River and of Vila Real de Santo António. The hydrometallurgy of pyrites was thus developed locally, pioneered at Santo Domingo, and allowed the survival and expansion of the British company from the late 1870s, that is, at a time when most small mines shut down because they could not compete globally.
Through different consented and regulated processes (judicial), through conflict, or through parliamentary mediation, the State exceptionally imposed additional costs on companies, whether as compensation or by requiring remediation measures to reduce environmental damage, thereby contributing in some cases to derailing projects. These cases suggest that the interaction between local conflicts, corporate behavior, and technological progress is complex. This article aims to contribute to the debate in economic and social history on the relationship between the environment and technological progress, arguing that fixed costs and imponderable economic and social risks were factors that encouraged companies to search for new solutions and introduce innovations that would allow the expansion of their activity. In this process the companies sometimes faced environmental dilemmas and unforeseen costs, with consequences for the firms' economies. The knowledge needed to address the environmental problems they created, however, is of a very different nature from the knowledge needed to face the environmental burdens inherent in the development of their activity.

Relevância: 100.00%

Resumo:

We apply the collective consumption model of Browning, Chiappori and Lewbel (2006) to analyse economic well-being and poverty among the elderly. The model focuses on individual preferences, a consumption technology that captures the economies of scale of living in a couple, and a sharing rule that governs the intra-household allocation of resources. The model is applied to a time series of Dutch consumption expenditure surveys. Our empirical results indicate substantial economies of scale and a wife's share that is increasing in total expenditures. We further calculated poverty rates by means of the collective consumption model. Collective poverty rates of widows and widowers turn out to be slightly lower than traditional ones based on a standard equivalence scale. Poverty among women (men) in elderly couples, however, seems to be heavily underestimated (overestimated) by the traditional approach. Finally, we analysed the impact of becoming a widow(er). Based on cross-sectional evidence, we find that the drop (increase) in material well-being following the husband's death is substantial for women in high (low) expenditure couples. For men, the picture is reversed.
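The contrast between the traditional equivalence-scale check and the collective approach can be sketched numerically; the poverty line, the 1.5 couple scale, the sharing rule, and the per-person cost factor below are illustrative assumptions, not the paper's estimates:

```python
# Hypothetical comparison: a traditional poverty check divides household
# expenditure by an equivalence scale, while a collective-style check first
# splits expenditure by a sharing rule, then scales each member's share by a
# consumption-technology factor (economies of scale of living together).

POVERTY_LINE = 1000.0  # per single adult, hypothetical

def poor_traditional(household_exp, scale=1.5):
    return household_exp / scale < POVERTY_LINE

def poor_collective(household_exp, wife_share, cost_factor=0.75):
    # Effective individual consumption = own share / per-person cost factor.
    wife = household_exp * wife_share / cost_factor
    husband = household_exp * (1 - wife_share) / cost_factor
    return wife < POVERTY_LINE or husband < POVERTY_LINE

exp = 1600.0
print(poor_traditional(exp))                  # False: 1600 / 1.5 ≈ 1067
print(poor_collective(exp, wife_share=0.4))   # True: 640 / 0.75 ≈ 853
```

The example mirrors the paper's point: a household can pass the traditional test while one partner, with a low intra-household share, is individually poor.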

Relevância: 100.00%

Resumo:

This applied research project, on the theme "Acquisition of goods and services at the Army level", aims to describe which measures can be implemented in the Army's procurement system to minimize costs and improve the rationalization of various resources. The work is structured in two parts. The first part is a theoretical exposition covering topics such as purchasing, decentralization and centralization, and the rules governing public procurement in the Public Administration. The second part presents the fieldwork, describing the interviews conducted to obtain information on the methodologies, flows, and procedures adopted by the branches of the Armed Forces and the Guarda Nacional Republicana, as well as a statistical analysis of the acquisitions of goods and services in 2015. The work followed a hypothetico-deductive methodology, which allowed previously formulated hypotheses to be tested. The investigation concluded that the Army's procurement system has many weaknesses that prevent it from enjoying the advantages that centralization brings, namely economies of scale. This is due to several existing gaps, particularly the lack of planning.

Relevância: 100.00%

Resumo:

Establishing regulation of the digital economy in Senegal is a fundamental challenge for policymakers and for all the actors involved. Following a more globalized approach, major normative changes affecting the rationales and mechanisms of regulation have evolved over time, giving law a more prominent place in states' public policies. Different normative and institutional models have thus been adapted to deal with the phenomenon of convergence, depending on a country's regulatory context. In Senegal's current context, the separation between the regulation of telecommunications and that of broadcasting, sectors that have now converged, rests on a model of sector-specific regulation. Their convergence, however, has blurred the boundaries between them, which now risks having major normative consequences, such as institutional or regulatory entanglement. Yet at the national level there is, to date, no legal text laying the foundations for convergent regulation. As to whether sector-specific regulation remains relevant in a digital environment marked by convergence, it turns out that it could be adopted as a short-term model. But in order to achieve economies of scale and regulate the various sectors and infrastructure industries effectively, a single regulatory model is needed, marked by the merger of the ARTP and the CNRA. On the one hand, sector-specific regulation can support the digital transition already under way; on the other, multi-sector regulation will serve once market convergence is established.

Relevância: 100.00%

Resumo:

Modern data centers host hundreds of thousands of servers to achieve economies of scale. Such a huge number of servers creates challenges for the data center network (DCN) to provide proportionally large bandwidth. In addition, the deployment of virtual machines (VMs) in data centers raises the requirements for efficient resource allocation and fine-grained resource sharing. Further, the large number of servers and switches in the data center consumes significant amounts of energy. Even though servers are becoming more energy-efficient through various energy-saving techniques, the DCN still accounts for 20% to 50% of the energy consumed by the entire data center. The objective of this dissertation is to enhance DCN performance as well as its energy efficiency by conducting optimizations on both the host and network sides. First, as the DCN demands huge bisection bandwidth to interconnect all the servers, we propose a parallel packet switch (PPS) architecture that directly processes variable-length packets without segmentation-and-reassembly (SAR). The proposed PPS achieves large bandwidth by combining the switching capacities of multiple fabrics, and it further improves switch throughput by avoiding padding bits in SAR. Second, since certain resource demands of VMs are bursty and stochastic in nature, to satisfy both deterministic and stochastic demands in VM placement we propose the Max-Min Multidimensional Stochastic Bin Packing (M3SBP) algorithm. M3SBP calculates an equivalent deterministic value for the stochastic demands and maximizes the minimum resource-utilization ratio of each server. Third, to provide the necessary traffic isolation for VMs that share the same physical network adapter, we propose the Flow-level Bandwidth Provisioning (FBP) algorithm. By reducing the flow scheduling problem to multiple stages of packet queuing problems, FBP guarantees the provisioned bandwidth and delay performance for each flow.
Finally, since DCNs are typically provisioned with full bisection bandwidth while traffic demonstrates fluctuating patterns, we propose a joint host-network optimization scheme to enhance the energy efficiency of DCNs during off-peak traffic hours. The proposed scheme utilizes a unified representation method that converts the VM placement problem to a routing problem, and employs depth-first and best-fit search to find efficient paths for flows.
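The "equivalent deterministic value" step of M3SBP can be sketched as follows; the abstract does not give the formula, so a mean-plus-safety-margin form (mean + z·std, roughly a normal-tail bound) is assumed here purely for illustration:

```python
# Hypothetical deterministic equivalent for a stochastic VM demand: with
# z = 1.65 and a roughly normal demand, the value is exceeded about 5% of
# the time. A placement check can then treat the demand as fixed.

def equivalent_demand(mean, std, z=1.65):
    return mean + z * std

def fits(server_capacity, placed_demands, new_demand):
    return sum(placed_demands) + new_demand <= server_capacity

vm = equivalent_demand(mean=2.0, std=0.5)  # 2.825
print(fits(server_capacity=8.0, placed_demands=[4.0], new_demand=vm))  # True
```

M3SBP's distinctive part, maximizing the minimum resource-utilization ratio across servers and multiple resource dimensions, would sit on top of this check and is not shown.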

Relevância: 100.00%

Resumo:

Inspired by dynamical systems theory and Brewer's contributions applying it to economics, this paper establishes a bond graph model. Two main features, a set of interconnectivities based on nodes and links (bonds) and a fractional-order dynamical perspective, prove to give a good macro-economic representation of countries' potential performance in today's globalized economy. Estimates based on time series for 50 countries over the last 50 years confirm the accuracy of the model and the importance of scale for economic performance.

Relevância: 100.00%

Resumo:

Construction professional service (CPS) firms sell expertise and provide innovative solutions for projects founded on their knowledge, experience, and technical competences. Large CPS firms seeking to grow will often seek new opportunities in their domestic market and overseas, by organic growth or by inorganic growth through mergers, alliances, and acquisitions. Growth can also come from increasing market penetration through vertical, horizontal, and lateral diversification. Such growth hopefully leads to economies of scope and scale in the long term, but it can also lead to diseconomies when the added cost of integration and the increased complexity of diversification no longer create tangible and intangible benefits. The aim of this research is to investigate the key influences on growth in scope and scale for large CPS firms. Qualitative data from interviews were underpinned by secondary data from CPS firms' annual reports and analysts' findings. The findings showed five key influences on the scope and scale of a CPS firm: the importance of growth as a driver; the influence of the ownership of the firm on the decision for growth in scope and scale; the optimization of resources and capabilities; the need to serve changing clients' needs; and the importance of localization. The research provides valuable insights into the growth strategies of international CPS firms. A major finding is the influence of ownership on CPS firms' growth strategies, which has not been highlighted in previous research.

Relevância: 100.00%

Resumo:

The spatial pattern of outbreaks of pink wax scale, Ceroplastes rubens Maskell, within and among umbrella trees, Schefflera actinophylla (Endl.), in southeastern Queensland was investigated. Pink wax scale was common on S. actinophylla, with approximately 84% of trees positive for scale and 14% of trees recording outbreak densities exceeding 0.4 adults per leaflet. Highly aggregated distributions of C. rubens occur within and among umbrella trees. Clumped distributions within trees appear to result from variable birth and death rates and the limited movement of first-instar crawlers. The patchy distribution of pink wax scale among trees is probably a consequence of variation in the dispersal success of scale, in host and environmental suitability for establishment, and in rates of biological control. Pink wax scale was more prevalent on trees in roadside positions and in exposed situations, indicating that such trees are more suitable and/or susceptible to scale colonisation.
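Aggregated ("clumped") distributions of this kind are commonly diagnosed with the variance-to-mean ratio (index of dispersion): about 1 for a random (Poisson) pattern and well above 1 for clumping. The counts below are invented for illustration; the study's own aggregation statistics are not reported in the abstract:

```python
# Index of dispersion (variance / mean) for per-leaflet scale-insect counts.
from statistics import mean, pvariance

def dispersion_index(counts):
    return pvariance(counts) / mean(counts)

scale_per_leaflet = [0, 0, 0, 12, 0, 1, 0, 15, 0, 2]  # hypothetical counts
print(dispersion_index(scale_per_leaflet) > 1)  # True: strongly clumped
```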