906 results for Returns to scale
Abstract:
Various sources indicate that modern cities are threatened by the availability of essential resource streams, among them energy. Most cities rely heavily on fossil fuels; no case of a fully energy self-sufficient city is known. Engineering resilience is the rate at which a system returns to a single steady or cyclic state following a perturbation. A degree of resilience sustained for the duration of a crisis would improve a city's capability to survive such a period without drastic measures.
The capability of cities to prepare for and respond to energy crises in the near future is supported by greater or temporary self-sufficiency. The objective of the underlying research is a model for a city, including its surrounding rural area, that can withstand energy crises. This requires, first, accurate monitoring of the current urban metabolism with respect to energy use, which can be used to pinpoint problem areas. It also requires a sustainable energy system in which the energy cycle is more fully closed, built through a three-step approach of energy savings, energy exchange and sustainable energy generation. The capacity to store energy surpluses for periods of shortage (crises) is essential.
The paper discusses the need for resilient cities and an approach for making cities resilient to energy crises.
Abstract:
Ten years after the production of the initial 'We Never Give Up' film, this documentary is a follow-up about the experiences of ten survivors of apartheid in South Africa and their struggle for reparations. Produced by the Human Rights Media Centre, Cape Town, the film was directed and filmed by Cahal McLaughlin in a collaborative relationship with the Khulumani Support Group Western Cape.
Further Information:
This documentary film, produced with the Human Rights Media Centre, Cape Town, and in collaboration with the Khulumani Support Group Western Cape, is the ten-year follow-up to We Never Give Up (2002), which addressed the issue of reparations as dealt with by the South African government and the Truth and Reconciliation Commission. We Never Give Up II (2012) returns to these themes and to the same participants, asking how life has changed in the interim. The collaborative process acknowledges the importance of sharing ownership/authorship of the storytelling process, as well as of validating the traumatic experiences of those who survived major and sustained political violence. Made over a two-year period and involving close consultation with participants, the film offers insights, by those most directly affected, into what might constitute legal, financial, social and psychological reparations. The film has been screened in Cape Town, Bloemfontein, at the Zanzibar Film Festival, in Belfast (Belfast Film Festival), Brighton, Guildford, Galway and London, always accompanied by Q&A discussion of the issues raised. To emphasise the importance of the film for policy debates around reparations, a 25-minute edited version was screened on SABC's 'Special Assignment' on April 29th, 2013 (South Africa's 'Freedom Day'), followed by a debate with Department of Justice spokesperson Dr Khotso De Wee. The chapter 'Maureen Never Gave Up' in Daniels, McLaughlin and Pearce (eds.), 'Truth, Dare or Promise' (2013), Cambridge Scholars Press (ISBN: 1-4438-4959-6; ISBN-13: 978-1-4438-4959-3; release date: 2013-09-01), which analyses the production of this film, is offered as part of the portfolio.
Abstract:
Diagrammatic many-body theory is used to calculate the scattering phase shifts, normalized annihilation rates Zeff, and annihilation γ spectra for positron collisions with the hydrogenlike ions He⁺, Li²⁺, B⁴⁺, and F⁸⁺. Short-range electron-positron correlations and longer-range positron-ion correlations are accounted for by evaluating nonlocal corrections to the annihilation vertex and the exact positron self-energy. The numerical calculation of the many-body theory diagrams is performed using B-spline basis sets. To elucidate the role of the positron-ion repulsion, the annihilation rate is also estimated analytically in the Coulomb-Born approximation. It is found that the energy dependence and magnitude of Zeff are governed by the Gamow factor that characterizes the suppression of the positron wave function near the ion. For all of the H-like ions, the correlation enhancement of the annihilation rate is found to be predominantly due to corrections to the annihilation vertex, while the corrections to the positron wave function play only a minor role. Results of the calculations for s-, p-, and d-wave incident positrons of energies up to the positronium-formation threshold are presented. Where comparison is possible, our values are in excellent agreement with the results obtained using other, e.g., variational, methods. The annihilation-vertex enhancement factors obtained in the present calculations are found to scale approximately as 1 + (1.6 + 0.46l)/Zi, where Zi is the net charge of the ion and l is the positron orbital angular momentum. Our results for positron annihilation in H-like ions provide insights into the problem of positron annihilation with core electrons in atoms and condensed matter systems, which have similar binding energies.
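As a quick illustration of the quoted fit (a minimal sketch; the function name and the example inputs are mine, and only the constants 1.6 and 0.46 come from the abstract):

```python
def vertex_enhancement(l: int, z_ion: int) -> float:
    """Approximate annihilation-vertex enhancement factor for a positron of
    orbital angular momentum l incident on an H-like ion of net charge z_ion,
    using the empirical scaling quoted in the abstract: 1 + (1.6 + 0.46*l)/Zi."""
    return 1.0 + (1.6 + 0.46 * l) / z_ion

# Example: s-wave (l = 0) positron on He+ (Zi = 1) versus F8+ (Zi = 8).
print(vertex_enhancement(0, 1))  # 2.6 -- strong enhancement for a low ion charge
print(vertex_enhancement(0, 8))  # 1.2 -- enhancement suppressed for a highly charged ion
```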
Abstract:
Modeling dynamical systems represents an important application class covering a wide range of disciplines, including but not limited to biology, chemistry, finance, national security, and health care. Such applications typically involve large-scale, irregular graph processing, which makes them difficult to scale due to the evolutionary nature of their workload, irregular communication, and load imbalance. EpiSimdemics is one such application, simulating epidemic diffusion in extremely large and realistic social contact networks. It implements a graph-based system that captures the dynamics among co-evolving entities. This paper presents an implementation of EpiSimdemics in Charm++ that enables future research by social, biological and computational scientists at unprecedented data and system scales. We present new methods for application-specific processing of graph data and demonstrate the effectiveness of these methods on a Cray XE6, specifically NCSA's Blue Waters system.
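For readers unfamiliar with this class of simulation, the following is a minimal sketch of epidemic diffusion over a person-location contact graph (a generic illustration only, not EpiSimdemics' actual data structures or Charm++ decomposition; all names and probabilities are hypothetical):

```python
import random

def simulate_day(visits, infected, p_transmit=0.05):
    """One time step of a toy contact-network epidemic.
    visits: dict mapping location -> list of person ids present that day
            (a bipartite person-location graph, flattened per day).
    infected: set of currently infectious person ids.
    Returns the set of newly infected persons."""
    newly_infected = set()
    for _location, people in visits.items():
        # Infection can only spread along co-location edges.
        sources = [p for p in people if p in infected]
        if not sources:
            continue
        for person in people:
            if person in infected or person in newly_infected:
                continue
            # Each infectious co-visitor is treated as an independent exposure.
            if any(random.random() < p_transmit for _ in sources):
                newly_infected.add(person)
    return newly_infected

# Example: three people visiting two locations, with person 0 initially infected.
visits = {"school": [0, 1], "shop": [1, 2]}
print(simulate_day(visits, infected={0}))
```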
Abstract:
The combination of ionic liquids (ILs) and supercritical CO2 (scCO2) allows efficient catalytic processes to be developed. Catalyst separation is generally a major challenge when enzymes or homogeneous organometallic catalysts are used for reactions, and IL–scCO2 systems address these separation problems, facilitating the recycling or continual use of the catalyst. Typically, the catalyst is dissolved in an IL, where it remains throughout the process, while scCO2 extracts the products from the IL (catalyst) phase. ILs and many catalysts are not soluble in scCO2, and this facilitates the clean separation of products from the catalyst and IL. When the pressure is reduced in a collection chamber, the scCO2 returns to gaseous CO2 and the products can be obtained without contamination by catalyst or solvents. IL–scCO2 systems can be operated in a continuous-flow manner, which further improves their efficiency and industrial potential. This chapter introduces the fundamental properties of these multiphase catalytic systems and highlights key examples of catalytic processes from the academic literature that illustrate the benefits of using this combination of solvents for catalysis.
Abstract:
Background: Large-scale randomised controlled trials are relatively rare in education. The present study approximates to, but is not exactly, a randomised controlled trial. It was an attempt to scale up previous small peer-tutoring projects while investing only modestly in continuing professional development for teachers.
Purpose: A two-year study of peer tutoring in reading was undertaken in one local education authority in Scotland. The relative effectiveness of cross-age versus same-age tutoring, light versus intensive intervention, and reading versus reading-and-mathematics tutoring was investigated.
Programme description: The intervention was Paired Reading, a freely available cross-ability tutoring method applied to books of the pupils' choice that are above the tutee's independent readability level. It involves Reading Together and Reading Alone, switching from one to the other according to need.
Sample: Eighty-seven primary schools of overall average socio-economic status, ability and gender balance in one council in Scotland. There were few ethnic-minority students, and proportions of students with special needs were low. Children were eight and ten years old when the intervention started. Macro-evaluation: n = 3520. Micro-evaluation: Year 1, 15 schools, n = 592; Year 2, a different 15 schools, n = 591; compared with a comparison group of five schools, n = 240.
Design and methods: Almost all the primary schools in the local authority participated and were randomly allocated to condition. The macro-evaluation tested and retested over a two-year period using Performance Indicators in Primary Schools. The micro-evaluation tested and retested within each year using norm-referenced tests of reading comprehension. The macro-evaluation used multi-level modelling; the micro-evaluation used descriptive statistics and effect sizes, analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA).
Results: The macro-evaluation yielded significant pre-post gains in reading attainment for cross-age tutoring over both years; no other differences were significant. The micro-evaluation yielded pre-post changes in Year 1 (selected) and Year 2 (random) greater than controls, with no difference between same-age and cross-age tutoring. Light and intensive tutoring were equally effective. Tutoring reading and mathematics together was more effective than tutoring reading alone. Lower socio-economic status and lower reading-ability students did better, and girls did better than boys. Regarding observed implementation quality, some factors were high and others low, and few implementation variables correlated with attainment gain.
Conclusions: Paired Reading tutoring does lead to better reading attainment compared with students not participating. This is true in the long term (macro-evaluation) for cross-age tutoring, and in the short term (micro-evaluation) for both cross-age and same-age tutoring. Tutors and tutees benefited. Intensity had no effect, but dual tutoring did. Low socio-economic status, low-ability and female students did better. The results of the different forms of evaluation were indeed different. There are implications for practice and for future research. © 2012 Copyright Taylor and Francis Group, LLC.
Abstract:
Increasingly large amounts of data are stored in the main memory of data center servers. However, DRAM-based memory is an important consumer of energy and is unlikely to scale in the future. Various byte-addressable non-volatile memory (NVM) technologies promise high density and near-zero static energy; however, they suffer from increased latency and increased dynamic energy consumption.
This paper proposes to leverage a hybrid memory architecture, consisting of both DRAM and NVM, through novel application-level data management policies that decide whether to place data on DRAM or NVM. We analyze modern column-oriented and key-value data stores and demonstrate the feasibility of application-level data management. Cycle-accurate simulation confirms that our methodology reduces energy consumption with the least performance degradation compared with current state-of-the-art hardware and OS approaches. Moreover, we use our techniques to apportion DRAM and NVM memory sizes for these workloads.
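By way of illustration only, the sketch below shows the general shape of an application-level placement heuristic (it is not the policy the paper proposes; all names and thresholds are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    size_bytes: int
    accesses_per_sec: float   # profiled access frequency for this data region
    write_fraction: float     # writes are the more costly operations on NVM

def choose_tier(region: Region, hot_threshold: float = 1e5) -> str:
    """Toy application-level placement decision: keep hot or write-heavy data
    in DRAM; spill cold, read-mostly data to byte-addressable NVM."""
    if region.accesses_per_sec > hot_threshold or region.write_fraction > 0.5:
        return "DRAM"
    return "NVM"

# Example: a key-value store's index versus its cold value log.
print(choose_tier(Region("hash-index", 2 << 30, 5e6, 0.4)))   # DRAM
print(choose_tier(Region("value-log", 64 << 30, 2e3, 0.1)))   # NVM
```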
Abstract:
This paper considers a general equilibrium theory of a competitive market economy with an endogenous social division of labour. The theory is founded on the notion of a “consumer-producer”, who consumes as well as produces commodities. In this approach, the emergence of a meaningful social division of labour is guided by the property of increasing returns to specialisation and the process of trade among fully specialised individuals. All decisions of individual consumer-producers are based on a set of perfectly competitive market prices of the commodities in the economy.
We show that a perfectly competitive price mechanism supports a dichotomy of production and consumption at the level of the individual consumer-producer. In this context we show the existence of competitive equilibria and characterise these equilibria under increasing returns to specialisation: under certain well-described conditions, markets are equilibrated through adjustment of the social division of labour, and prices are therefore fully determined by the supply side of the economy.
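One common way to formalise increasing returns to specialisation (a sketch of the standard modelling device, not necessarily the specification used in this paper) is to let a consumer-producer's output of a good grow more than proportionally with the time devoted to producing it:

```latex
% A consumer-producer divides one unit of time across goods j = 1, ..., n.
% Output of good j when a fraction t_j of time is devoted to it:
%   q_j = t_j^alpha with alpha > 1  (increasing returns to specialisation),
% so concentrating all time on one good yields more output than spreading it,
% e.g. for alpha = 2:  1^2 = 1  >  (1/2)^2 + (1/2)^2 = 1/2.
\[
  q_j = t_j^{\alpha}, \qquad \sum_{j=1}^{n} t_j \le 1, \qquad \alpha > 1 .
\]
```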
Abstract:
Introduction: Neuropeptides contribute to the pathophysiology of peripheral inflammation, and a neurogenic component has been described for many inflammatory diseases, including periodontitis. Neuropeptides are susceptible to cleavage by peptidases, and therefore the exact location and level of expression of peptidases are major determinants of neuropeptide action. Previous studies by our research group suggested that levels of the neuropeptide calcitonin gene-related peptide (CGRP) may be regulated by peptidases present in gingival crevicular fluid (GCF). Objectives: The aim of this work was to purify and partially characterize the GCF enzyme responsible for CGRP degradation using a biotinylated hydroxamate affinity probe (based on the P1-P4 amino acid sequence of the observed cleavage site), which we previously showed to inhibit CGRP degradation. Methods: Pooled healthy and pooled periodontitis GCF samples were subjected to a pre-clear step with magnetic streptavidin beads. Healthy and diseased samples were then incubated with the biotinylated hydroxamate probe (20 μM), after which biotinylated proteins were purified from the sample using magnetic streptavidin beads. Bound proteins were subjected to SDS-PAGE and western blotting, and biotin-incorporated proteins were disclosed using a streptavidin-horseradish peroxidase conjugate. Results: A band was disclosed in the pooled periodontitis sample at a molecular weight of approximately 60 kDa. The band was absent from the pooled healthy samples. As expected, when periodontitis samples were pre-boiled to denature proteins before the addition of the hydroxamate probe, no biotin-incorporated band was present. Conclusions: This work demonstrates the purification and disclosure of a protein found specifically in periodontitis which binds to the specific biotinylated hydroxamate affinity probe, based on the cleavage site of CGRP, only when in its native form. We intend to scale up the sample size, allowing identification of the putative CGRP-degrading peptidase by MALDI mass spectrometry.
Funded by an IADR/GlaxoSmithKline Innovation in Oral Care Award
Abstract:
Many-body theory is developed to calculate the γ spectra for positron annihilation in noble-gas atoms. Inclusion of electron-positron correlation effects and core annihilation gives spectra in excellent agreement with experiment [K. Iwata et al., Phys. Rev. Lett. 79, 39 (1997)]. The calculated correlation enhancement factors γnl for individual electron orbitals nl are found to scale with the ionization energy Inl (in eV) as γnl = 1 + √(A/Inl) + (B/Inl)^β, where A ≈ 40 eV, B ≈ 24 eV, and β ≈ 2.3.
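As a minimal illustration of this fit (the function name and the example ionization energies are mine; only A ≈ 40 eV, B ≈ 24 eV and β ≈ 2.3 come from the abstract):

```python
import math

def enhancement_factor(ionization_energy_ev: float,
                       a: float = 40.0, b: float = 24.0, beta: float = 2.3) -> float:
    """Fitted orbital enhancement factor gamma_nl = 1 + sqrt(A/I) + (B/I)**beta."""
    return 1.0 + math.sqrt(a / ionization_energy_ev) + (b / ionization_energy_ev) ** beta

# Example: a loosely bound valence orbital (I ~ 16 eV) is enhanced far more
# strongly than a tightly bound core orbital (I ~ 250 eV).
print(round(enhancement_factor(16.0), 2))    # about 5.1
print(round(enhancement_factor(250.0), 2))   # about 1.4
```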
Abstract:
Master's dissertation, Educational Sciences, specialisation in Early Childhood Education, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve; Escola Superior de Educação, Instituto Politécnico de Lisboa, 2007
Abstract:
Master's dissertation, Water and Coastal Management, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2007
A study of the potential for abnormal returns from momentum strategies in the Euronext market
Abstract:
Master's dissertation, Corporate Finance, Faculdade de Economia, Universidade do Algarve, 2014
Abstract:
[Updated August 2016] The Hotel Valuation Software, freely available from Cornell's Center for Hospitality Research, has been updated to reflect the many changes in the 11th Edition of the Uniform System of Accounts for the Lodging Industry (USALI). Version 4.0 of the Hotel Valuation Software provides numerous enhancements over the original tool from 2011. In addition to a significant increase in functionality and an update to reflect the 11th edition of the USALI, Version 4.0 takes advantage of the power of the latest release of Microsoft Excel®. Note that Version 4.0 works only on a PC running Microsoft Windows; it does not work on a Mac running OS X. Users desiring an OS X-compatible version should click here (labelled as Version 2.5). Users desiring a Mandarin version of the Hotel Valuation Software (the manual and three programs) should click here. The Hotel Valuation Software remains the only non-proprietary computer software designed specifically to assist in the preparation of market studies, forecasts of income and expense, and valuations for lodging property. The software provides an accurate, consistent, and cost-effective way for hospitality professionals to forecast occupancy, revenues and expenses and to perform hotel valuations. Version 4.0 of the Hotel Valuation Software includes the following upgrades: a complete update to reflect the 11th edition of the USALI (the most significant change to the chart of accounts in a generation), an average daily rate forecasting tool, a much more sophisticated valuation module, and an optional valuation tool useful in periods of limited capital liquidity. Using established methodology, the Hotel Valuation Software is a sophisticated tool for lodging professionals. The tool consists of three separate software programs written as Microsoft Excel files and a software users' guide, and is provided through the generosity of HVS and the School of Hotel Administration. The three software modules are:
Room Night Analysis and Average Daily Rate: Enables the analyst to evaluate the various competitive factors, such as occupancy, average room rate, and market segmentation, for competitive hotels in a local market. Calculates the area-wide occupancy and average room rate, as well as the competitive market mix, and produces a forecast of occupancy and average daily rate for existing and proposed hotels in a local market. The program incorporates such factors as competitive occupancies, market segmentation, unaccommodated demand, latent demand, growth of demand, and the relative competitiveness of each property in the local market. The program outputs include ten-year projections of occupancy and average daily rate.
Fixed and Variable Revenue and Expense Analysis: The key to any market study and valuation is a supportable forecast of revenues and expenses. Hotel revenues and expenses comprise many different components that display certain fixed and variable relationships to each other. This program enables the analyst to input comparable financial operating data and forecast a complete 11-year income and expense statement by defining a small set of inputs: the expected future occupancy levels for the subject hotel, base-year operating data for the subject hotel, fixed and variable relationships for revenues and expenses, and expected inflation rates for revenues and expenses.
Hotel Capitalization Software: A discounted cash flow valuation model utilizing the mortgage-equity technique forms the basis for this program.
Values are produced using three distinct underwriting criteria: a loan-to-value ratio, in which the size of the mortgage is based on property value; a debt coverage ratio (also known as a debt-service coverage ratio), in which the size of the mortgage is based on property-level cash flow, mortgage interest rate, and mortgage amortization; and a debt yield, in which the size of the mortgage is based on property-level cash flow alone. By entering the terms of typical lodging financing, along with a forecast of revenue and expense, the program determines the value that provides the stated returns to the mortgage and equity components. The program allows for a variable holding period of four to ten years, and includes an optional model, useful during periods of capital market illiquidity, that assumes a property refinancing during the holding period.
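For readers unfamiliar with these underwriting criteria, the following is a minimal sketch of how each one sizes the mortgage (standard real-estate finance definitions, not code from the Hotel Valuation Software; all names and numbers are illustrative):

```python
def mortgage_by_ltv(property_value: float, ltv: float) -> float:
    """Loan-to-value: the mortgage is a fixed fraction of property value."""
    return ltv * property_value

def mortgage_by_dcr(noi: float, dcr: float, annual_mortgage_constant: float) -> float:
    """Debt coverage ratio: net operating income must cover debt service dcr times,
    so the supportable annual debt service is noi / dcr, and the loan is that
    amount divided by the mortgage constant (annual payment per dollar of loan)."""
    return (noi / dcr) / annual_mortgage_constant

def mortgage_by_debt_yield(noi: float, debt_yield: float) -> float:
    """Debt yield: the loan is sized so that noi / loan equals the required yield."""
    return noi / debt_yield

# Illustrative hotel: NOI of $1.2M, value estimate of $15M,
# 9.5% mortgage constant, 70% LTV, 1.4x DCR, 10% debt yield.
noi, value = 1_200_000, 15_000_000
print(mortgage_by_ltv(value, 0.70))         # 10,500,000
print(mortgage_by_dcr(noi, 1.4, 0.095))     # about 9,022,556
print(mortgage_by_debt_yield(noi, 0.10))    # 12,000,000
```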
Abstract:
A simple but effective technique to improve the performance of the Max-Log-MAP algorithm is to scale the extrinsic information exchanged between the two MAP decoders. A comprehensive analysis of the selection of the scaling factors according to channel conditions and decoding iterations is presented in this paper. Choosing a constant scaling factor for all SNRs and iterations is compared with selecting the best scaling factor for the changing channel conditions and decoding iterations. It is observed that a constant scaling factor for all channel conditions and decoding iterations offers the best compromise, providing a 0.2-0.4 dB gain over the standard Max-Log-MAP algorithm; a constant scaling factor should therefore be chosen.
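A minimal sketch of where such a scaling factor enters a turbo-decoding iteration (the component decoder and interleaver are stubbed out as caller-supplied functions, and the 0.7 value is a commonly used choice in the literature, not a number taken from this paper):

```python
import numpy as np

def turbo_decode(llr_sys, llr_par1, llr_par2, interleave, deinterleave,
                 max_log_map, iterations=8, scale=0.7):
    """Iterative turbo decoding with scaled extrinsic information.
    max_log_map(systematic_llr, parity_llr, a_priori_llr) -> extrinsic LLRs
    is assumed to be a component Max-Log-MAP decoder supplied by the caller."""
    a_priori = np.zeros_like(llr_sys)
    for _ in range(iterations):
        # Component decoder 1, then attenuate its over-optimistic extrinsic output.
        ext1 = scale * max_log_map(llr_sys, llr_par1, a_priori)
        # Component decoder 2 works in the interleaved domain.
        ext2 = scale * max_log_map(interleave(llr_sys), llr_par2, interleave(ext1))
        a_priori = deinterleave(ext2)
    # Approximate a-posteriori LLRs: channel term plus both extrinsic terms.
    return llr_sys + ext1 + deinterleave(ext2)
```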