813 results for zone-based policy
Abstract:
Reducing fossil fuel consumption and developing energy-saving technologies are matters of central importance for both industry and research, owing to the drastic effects that anthropogenic pollutant emissions are having on the environment. While a growing number of norms and regulations are being issued to address these problems, the need to develop low-emission technologies is driving research across numerous industrial sectors. Although the deployment of renewable energy sources is seen as the most promising long-term solution, a full and effective integration of these technologies is currently impracticable, both because of technical constraints and because of the sheer share of energy production, currently met by fossil sources, that the alternative technologies would have to cover. Optimising energy production and management, on the other hand, together with the development of technologies for reducing energy consumption, represents an adequate solution to the problem, and one that can be deployed over shorter time horizons. The aim of this thesis is to investigate, develop and apply a set of numerical tools for optimising the design and management of energy processes, which can be used to reduce fuel consumption and improve energy efficiency. The methodology developed relies on a model-based numerical approach, exploiting the predictive capabilities that derive from a mathematical representation of the processes to devise optimisation strategies for them under realistic operating conditions.
In developing these procedures, particular emphasis is placed on the need to derive correct management strategies that account for the dynamics of the plants analysed, so as to obtain the best performance during actual operation. Over the course of the thesis the energy optimisation problem is addressed with reference to three different technological applications. The first is a multi-source plant serving the energy demand of a commercial building. Since the system uses several different technologies to supply the thermal and electrical energy requested by the users, the correct load-allocation strategy must be identified to guarantee maximum plant efficiency. Based on a simplified plant model, the problem was solved with a deterministic Dynamic Programming algorithm, and the results were compared with those of a simpler rule-based strategy, thereby demonstrating the advantages of adopting an optimal control strategy. The second application investigated the design of a hybrid solution for energy recovery from a hydraulic excavator. Since several technological layouts can be conceived to implement this solution, and the additional components must be sized correctly, a methodology is needed to assess the maximum performance obtainable from each alternative. The comparison between the different layouts was therefore carried out on the basis of the energy performance of the machine over a standardised digging cycle, estimated with the help of a detailed plant model.
Since adding energy-recovery devices introduces additional degrees of freedom into the system, their optimal control strategy also had to be determined in order to assess the maximum performance obtainable from each layout. This problem was again solved with a Dynamic Programming algorithm exploiting a simplified system model devised for the purpose. Once the optimal performance of each design had been determined, a fair comparison between the alternatives could be made. The third and final application analysed an organic Rankine cycle (ORC) plant for recovering waste heat from passenger-car exhaust gases. Although ORC plants can potentially deliver significant fuel savings for a vehicle, their correct operation requires complex control strategies able to cope with the variability of the process heat source; moreover, while fuel savings are maximised, the system must be kept within safe operating conditions. To address the problem, a robust and efficient plant model was built, based on the Moving Boundary Methodology, to simulate the phase-change dynamics of the organic fluid and estimate plant performance. The model was then used to design a model predictive controller (MPC) able to estimate the optimal control parameters for managing the system during transient operation. To solve the corresponding nonlinear dynamic optimisation problem, an algorithm based on Particle Swarm Optimization was developed.
The results obtained with this controller were compared with those obtainable from a classical proportional-integral (PI) controller, once again showing the energy advantages of adopting an optimal control strategy.
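The deterministic Dynamic Programming approach mentioned above can be sketched as a backward recursion over a discretized storage state. Everything below, including the demand profile, the storage levels and the fuel-cost curve, is a hypothetical illustration, not data from the thesis.

```python
# Deterministic Dynamic Programming sketch for optimal load dispatch with
# thermal storage. Demand, storage grid and cost curve are all hypothetical.
import math

demand = [3.0, 5.0, 4.0]      # thermal demand per time step (kW, hypothetical)
levels = [0.0, 1.0, 2.0]      # discretized storage state (kWh, hypothetical)
boiler_max = 6.0              # maximum boiler output (kW)

def fuel_cost(q):
    """Convex fuel cost of producing q kW (hypothetical curve)."""
    return 0.5 * q + 0.1 * q ** 2

def dp_dispatch(demand):
    """Backward induction: value[s] = minimal cost-to-go from storage level s."""
    value = {s: 0.0 for s in levels}   # terminal leftover storage is not valued
    policy = []
    for d in reversed(demand):
        new_value, step_policy = {}, {}
        for s in levels:
            best_cost, best_next = math.inf, None
            for s_next in levels:
                q = d + (s_next - s)   # production = demand + storage charge
                if 0.0 <= q <= boiler_max:
                    c = fuel_cost(q) + value[s_next]
                    if c < best_cost:
                        best_cost, best_next = c, s_next
            new_value[s], step_policy[s] = best_cost, best_next
        policy.insert(0, step_policy)
        value = new_value
    return value, policy
```

Starting from empty storage, `dp_dispatch` returns the minimal total fuel cost and, per time step, the optimal next storage level for each current level; with a convex cost curve the recursion smooths production across steps, which a rule-based strategy that simply tracks demand cannot beat.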
Abstract:
Copyright history has long been a subject of intense and contested enquiry. Historical narratives about the early development of copyright were first prominently mobilised in eighteenth-century British legal discourse, during the so-called Battle of the Booksellers between Scottish and London publishers. The two landmark copyright decisions of that time – Millar v. Taylor (1769) and Donaldson v. Becket (1774) – continue to provoke debate today. The orthodox reading of Millar and Donaldson presents copyright as a natural proprietary right at common law inherent in authors. Revisionist accounts dispute that traditional analysis. These conflicting perspectives have, once again, become the subject of critical scrutiny with the publication of Copyright at Common Law in 1774 by Prof Tomas Gomez-Arostegui in 2014, in the Connecticut Law Review ((2014) 47 Conn. L. Rev. 1) and as a CREATe Working Paper (No. 2014/16, 3 November 2014).
Taking Prof Gomez-Arostegui’s extraordinary work in this area as a point of departure, Dr Elena Cooper and Professor Ronan Deazley (then both academics at CREATe) organised an event, held at the University of Glasgow on 26th and 27th March 2015, to consider the interplay between copyright history and contemporary copyright policy. Is Donaldson still relevant, and, if so, why? What justificatory goals are served by historical investigation, and what might be learned from the history of the history of copyright? Does the study of copyright history still have any currency within an evidence-based policy context that is increasingly preoccupied with economic impact analysis?
This paper provides a lasting record of these discussions, including an editorial introduction, written comments by each of the panelists and Prof. Gomez-Arostegui, and an edited transcript of the Symposium debate.
Abstract:
This report presents the results of the largest study ever conducted into the law, policy and practice of primary school teachers’ reporting of child sexual abuse in New South Wales, Queensland and Western Australia. The study included the largest Australian survey of teachers about reporting sexual abuse, in both government and non-government schools (n=470). Our research has produced evidence-based findings to enhance law, policy and practice about teachers’ reporting of child sexual abuse. The major benefits of our findings and recommendations are to:
• Show how the legislation in each State can be improved;
• Show how the policies in government and non-government school sectors can be improved; and
• Show how teacher training can be improved.
These improvements can enhance the already valuable contribution that teachers are making to identify cases of child sexual abuse. Based on the findings of our research, this report proposes solutions to issues in seven key areas of law, policy and practice. These solutions are relevant for State Parliaments, government and non-government educational authorities, and child protection departments. The solutions in each State are practicable, low-cost, and align with current government policy approaches. Implementing these solutions will:
• protect more children from sexual abuse;
• save cost to governments and society;
• develop a professional teacher workforce better equipped for their child protection role; and
• protect government and school authorities from legal liability.
Abstract:
We examine the asset allocation, returns, and expenses of superannuation funds whose assets are mainly invested in default investment options between 2004 and 2012. A majority of these funds fail to earn returns commensurate with their strategic asset allocation policy. It appears that much of the variation of returns between the funds might be a result of their engaging in significant active management of assets. Our results indicate that returns from active management are negatively related to expenses. We also find strong evidence of economies of scale existing in these superannuation funds across different size categories.
Abstract:
Performance based planning (PBP) is purported to be a viable alternative to traditional zoning. The implementation of PBP ranges between pure approaches, which rely on predetermined quantifiable performance standards to determine land use suitability, and hybrid approaches, which rely on a mix of activity based zones in addition to prescriptive and subjective standards. Jurisdictions in the USA, Australia and New Zealand have attempted this type of land use regulation with varying degrees of success. Despite the adoption of PBP legislation in these jurisdictions, this paper argues that a lack of extensive evaluation means that PBP is not well understood and that the purported advantages of this type of planning are rarely achieved in practice. Few empirical studies have attempted to examine how PBP has been implemented in practice. In Queensland, Australia, the Integrated Planning Act 1997 (IPA) operated as Queensland's principal planning legislation between March 1998 and December 2009. While the IPA did not explicitly use the term performance based planning, Queensland's planning system is widely considered to be performance based in practice. Significantly, the IPA prevented local government from prohibiting development or use, and the term zone was absent from the legislation. How plan-making would be advanced under the new planning regime was not clear, and as a consequence local governments produced a variety of different plan-making approaches to comply with the new legislative regime. To analyse this variation, the research developed a performance adoption spectrum to classify plans ranging between pure and hybrid perspectives of PBP. The spectrum compares how land use was regulated in seventeen IPA plans across Queensland. The research found that hybrid plans predominated, and that over time a greater reliance on risk-averse drafting approaches created a quasi-prohibition plan, the exact opposite of what was intended by the IPA.
This paper concludes that the drafting of the IPA and the absence of plan-making guidance contributed to a lack of shared understanding about the intended direction of the new planning system and resulted in many administrative interpretations of the legislation. It was a planning direction that tried too hard to be different, and as a result it created a perception of land use risk and uncertainty that caused a return to more prescriptive and inflexible plan-making methods.
Abstract:
This thesis studies the interest-rate policy of the ECB by estimating monetary policy rules using real-time data and central bank forecasts. The aim of the estimations is to characterize a decade of common monetary policy and to examine how different models perform at this task. The estimated rules include contemporary Taylor rules, forward-looking Taylor rules, nonlinear rules and forecast-based rules. The nonlinear models allow for the possibility of zone-like preferences and an asymmetric response to key variables. The models therefore encompass the most popular sub-group of simple models used for policy analysis as well as the more unusual nonlinear approach. In addition to the empirical work, this thesis also contains a more general discussion of monetary policy rules, mostly from a New Keynesian perspective. This discussion includes an overview of some notable related studies, optimal policy, policy gradualism and several other related subjects. The regression estimations are performed with either least squares or the generalized method of moments, depending on the requirements of the estimation. The estimations use data from both the Euro Area Real-Time Database and the central bank forecasts published in ECB Monthly Bulletins; these sources represent some of the best data available for this kind of analysis. The main results of the thesis are that forward-looking behavior appears highly prevalent, but that standard forward-looking Taylor rules offer only ambivalent results with regard to inflation. Nonlinear models are shown to work, but they do not have a strong rationale over a simpler linear formulation. The forecasts, however, appear to be highly useful in characterizing policy and may offer the most accurate depiction of a predominantly forward-looking central bank. In particular, the inflation response appears much stronger, while the output response becomes highly forward-looking as well.
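A contemporary Taylor rule of the kind estimated above can be written i_t = c + a·π_t + b·y_t + ε_t, where i is the policy rate, π inflation and y the output gap. The sketch below fits such a rule by ordinary least squares on synthetic data; the generating coefficients (c = 1.0, a = 1.5, b = 0.5) are hypothetical and bear no relation to the actual ECB estimates.

```python
# Least-squares estimation of a simple Taylor rule on synthetic data.
# All coefficients and series are illustrative, not ECB estimates.
import numpy as np

rng = np.random.default_rng(0)
T = 200
pi = rng.normal(2.0, 1.0, T)    # inflation (synthetic)
y = rng.normal(0.0, 1.0, T)     # output gap (synthetic)
# "True" rule used to generate the policy rate, plus a small shock:
i = 1.0 + 1.5 * pi + 0.5 * y + rng.normal(0.0, 0.1, T)

# OLS: regress the rate on a constant, inflation and the output gap.
X = np.column_stack([np.ones(T), pi, y])
coef, *_ = np.linalg.lstsq(X, i, rcond=None)
c_hat, a_hat, b_hat = coef
```

An estimated inflation coefficient a_hat above one is the usual check of the Taylor principle, i.e. that the nominal rate reacts more than one-for-one to inflation.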
Abstract:
We propose a simulation-based algorithm for computing the optimal pricing policy for a product under uncertain demand dynamics. We consider a parameterized stochastic differential equation (SDE) model for the uncertain demand dynamics of the product over the planning horizon; in particular, we consider a dynamic model that is an extension of the Bass model. The performance of our algorithm is compared to that of a myopic pricing policy and is shown to give better results. Our algorithm has two significant advantages: (a) it does not require information on the system model parameters, provided the SDE system state is known via either a simulation device or real data, and (b) it works efficiently even for high-dimensional parameters, as it uses the efficient smoothed functional gradient estimator.
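A minimal sketch of the kind of SDE demand model referred to above: an Euler-Maruyama simulation of a Bass-type diffusion whose adoption rate is scaled by a price-sensitivity factor. The price-effect function and every parameter value are hypothetical illustrations, not the paper's model.

```python
# Euler-Maruyama simulation of a stochastic Bass-type demand model,
#   dN = (p + q*N/m) * (m - N) * f(price) dt + sigma * (m - N) dW.
# The price-sensitivity f and all parameter values are hypothetical.
import math
import random

def simulate_bass_sde(price, m=1000.0, p=0.03, q=0.4, sigma=0.01,
                      T=10.0, steps=1000, seed=1):
    """Return cumulative adopters N(T) for a fixed price (illustrative only)."""
    rng = random.Random(seed)
    dt = T / steps
    N = 0.0
    f_price = math.exp(-0.05 * price)   # hypothetical price effect on demand
    for _ in range(steps):
        drift = (p + q * N / m) * (m - N) * f_price
        dW = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        N += drift * dt + sigma * (m - N) * dW
        N = min(max(N, 0.0), m)              # keep adoption within [0, m]
    return N
```

A pricing policy would then be evaluated by simulating revenue paths under this dynamics; as expected of a Bass-type model, higher prices slow the diffusion and yield fewer adopters over the horizon.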
Abstract:
We investigated the site response characteristics of the Kachchh rift basin over the meizoseismal area of the 2001, Mw 7.6, Bhuj (NW India) earthquake using the spectral ratio of the horizontal and vertical components of ambient vibrations. Using the available knowledge of the regional geology of Kachchh and well-documented ground responses from the earthquake, we evaluated the pattern of the H/V curves across sediment-filled valleys and uplifted areas generally characterized by weathered sandstones. Although our H/V curves showed a largely fuzzy nature, we found that the hierarchical clustering method was useful for comparing large numbers of response curves and identifying areas with similar responses. Broad, plateau-shaped peaks of a cluster of curves within the valley region suggest the possibility of basin effects within the valley. Fundamental resonance frequencies (f0) were found in the narrow range of 0.1-2.3 Hz, and their spatial distribution demarcated the uplifted regions from the valleys. In contrast, low H/V peak amplitudes (A0 = 2-4) were observed on the uplifted areas, while varying values (2-9) were found within the valleys. Compared to the amplification factors, the liquefaction indices (Kg) were able to effectively indicate the areas which experienced severe liquefaction. The amplification ranges obtained in the current study were found to be comparable to those obtained from earthquake data for a limited number of seismic stations located on uplifted areas; however, the values in the valley region may not reflect their true amplification potential due to basin effects. Our study highlights the practical usefulness as well as the limitations of the H/V method for studying complex geological settings such as Kachchh.
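The H/V spectral-ratio computation itself can be sketched on synthetic three-component ambient-vibration records: the amplitude spectra of the two horizontal components are combined and divided by the vertical spectrum, and the peak of the smoothed ratio estimates f0. The embedded 1 Hz horizontal resonance, the smoothing window and all signal parameters below are illustrative assumptions, not data from the study.

```python
# H/V spectral ratio on synthetic three-component ambient vibrations.
# The 1 Hz resonance and all signal parameters are illustrative.
import numpy as np

fs = 50.0                        # sampling rate (Hz)
t = np.arange(0, 120, 1 / fs)    # 120 s of synthetic "noise"
rng = np.random.default_rng(0)

v = rng.normal(0, 1, t.size)                                      # vertical
ns = rng.normal(0, 1, t.size) + 5 * np.sin(2 * np.pi * 1.0 * t)   # N-S
ew = rng.normal(0, 1, t.size) + 5 * np.cos(2 * np.pi * 1.0 * t)   # E-W

freqs = np.fft.rfftfreq(t.size, 1 / fs)
V = np.abs(np.fft.rfft(v))
# Quadratic-mean combination of the two horizontal amplitude spectra:
H = np.sqrt((np.abs(np.fft.rfft(ns)) ** 2 + np.abs(np.fft.rfft(ew)) ** 2) / 2)

def smooth(x, w=11):
    """Simple moving-average smoothing of an amplitude spectrum."""
    return np.convolve(x, np.ones(w) / w, mode="same")

hv = smooth(H) / smooth(V)       # smoothing avoids near-zero vertical bins
f0 = freqs[np.argmax(hv)]        # fundamental resonance frequency estimate
```

In practice a frequency-dependent smoothing such as Konno-Ohmachi and windowed averaging over many noise segments would replace the plain moving average used here.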
Abstract:
Coordinated measurements of temperature, velocity and free-surface oscillation were obtained using the drop-shaft facility for microgravity experiments on half floating zone convection. The ground-based studies captured the transition from steady to oscillatory convection through this multi-quantity measurement.
Abstract:
Executive Summary: Information found in this report covers the years 1986 through 2005. Mussel Watch began by monitoring a suite of trace metals and organic contaminants such as DDT, PCBs and PAHs. Over time additional chemicals were added, and today approximately 140 analytes are monitored. The Mussel Watch Program is the longest running estuarine and coastal pollutant monitoring effort of national scope conducted in the United States. Hundreds of scientific journal articles and technical reports based on Mussel Watch data have been written; however, this report is the first to present local, regional and national findings across all years in a Quick Reference format, suitable for use by policy makers, scientists, resource managers and the general public. Pollution often starts at the local scale, where high concentrations point to a specific source of contamination, yet some contaminants, such as PCBs, are atmospherically transported across regional and national scales, resulting in contamination far from their origin. The findings presented here show few national trends for trace metals and decreasing trends for most organic contaminants; however, a wide variety of trends, both increasing and decreasing, emerge at regional and local levels. For most organic contaminants, trends have resulted from state and federal regulation. The highest concentrations of both metal and organic contaminants are found near urban and industrial areas. In addition to monitoring throughout the nation’s coastal shores and Great Lakes, Mussel Watch samples are stored in a specimen bank so that trends can be determined retrospectively for new and emerging contaminants of concern. For example, there is heightened awareness of a group of flame retardants that are finding their way into the marine environment.
These compounds, known as polybrominated diphenyl ethers (PBDEs), are now being studied using historic samples from the specimen bank and current samples to determine their spatial distribution. We will continue to use this kind of investigation to assess new contaminant threats. We hope you find this document to be valuable, and that you continue to look towards the Mussel Watch Program for information on the condition of your coastal waters.