61 results for Real Electricity Markets Data


Relevance:

30.00%

Publisher:

Abstract:

This dissertation focuses on the practice of regulatory governance, through a study of the functioning of formally independent regulatory agencies (IRAs), with special attention to their de facto independence. The research goals are grounded in a "neo-positivist" (or "reconstructed positivist") position (Hawkesworth 1992; Radaelli 2000b; Sabatier 2000). This perspective starts from the ontological assumption that even if subjective perceptions are constitutive elements of political phenomena, a real world exists beyond any social construction and can, however imperfectly, become the object of scientific inquiry. Epistemologically, it follows that hypothetical-deductive theories with explanatory aims can be tested by employing a proper methodology and set of analytical techniques. It is thus possible, to a certain extent, to make scientific inferences and draw general conclusions according to a Bayesian conception of knowledge, updating prior scientific beliefs in the truth of the related hypotheses (Howson 1998), while acknowledging that the conditions of truth are at least partially subjective and historically determined (Foucault 1988; Kuhn 1970). At the same time, a sceptical position is adopted towards the supposed disjunction between facts and values and the possibility of discovering abstract universal laws in social science. It has been observed that the current version of capitalism corresponds to the golden age of regulation, and that since the 1980s no government activity in OECD countries has grown faster than regulatory functions (Jacobs 1999). In an apparent paradox, the ongoing dynamics of liberalisation, privatisation, decartelisation, internationalisation, and regional integration have hardly led to the crumbling of the state, but have instead promoted a wave of regulatory growth in the face of new risks and new opportunities (Vogel 1996).
Accordingly, a new order of regulatory capitalism is rising, implying a new division of labour between state and society and entailing the expansion and intensification of regulation (Levi-Faur 2005). The previous order, relying on public ownership and public intervention and/or on sectoral self-regulation by private actors, is being replaced by a more formalised, expert-based, open, and independently regulated model of governance. Independent regulatory agencies (IRAs), that is, formally independent administrative agencies with regulatory powers that benefit from public authority delegated by political decision makers, represent the main institutional feature of regulatory governance (Gilardi 2008). IRAs constitute a relatively new technology of regulation in western Europe, at least for certain domains, but they are increasingly widespread across countries and sectors. For instance, independent regulators have been set up to regulate very diverse issues, such as general competition, banking and finance, telecommunications, civil aviation, railway services, food safety, the pharmaceutical industry, electricity, environmental protection, and personal data privacy. Two attributes of IRAs deserve special mention. On the one hand, they are formally separated from democratic institutions and elected politicians, raising normative and empirical concerns about their accountability and legitimacy. On the other hand, some hard questions remain unaddressed about their role as political actors (since, together with regulatory competencies, IRAs often accumulate executive, (quasi-)legislative, and adjudicatory functions) as well as about their performance.

Relevance:

30.00%

Publisher:

Abstract:

Monitoring of T-cell responses in genital mucosa has remained a major challenge because of the absence of lymphoid aggregates and the low abundance of T cells. Here we have adapted a sensitive real-time reverse transcription-PCR (TaqMan) method to genital tissue to measure induction of gamma interferon (IFN-gamma) mRNA transcription after 3 h of antigen-specific activation of CD8 T cells. For this purpose, we vaccinated C57BL/6 mice subcutaneously with human papillomavirus type 16 L1 virus-like particles and monitored the induction of CD8 T cells specific to the L1(165-173) H-2D(b)-restricted epitope. Comparison of the responses induced in peripheral blood mononuclear cells and lymph nodes (LN) by L1-specific IFN-gamma enzyme-linked immunospot assay and by TaqMan determination of the relative increase in L1-specific IFN-gamma mRNA induction, normalized to the content of CD8b mRNA, showed a significant correlation, despite the difference in the readouts. Most of the cervicovaginal tissues could be analyzed by the TaqMan method if normalization to glyceraldehyde-3-phosphate dehydrogenase mRNA was used, and a significant L1-specific IFN-gamma induction was found in one-third of the immunized mice. This local response did not correlate with the immune responses measured in the periphery, with the exception of the sacral LN, an LN draining the genital mucosa, where a significant correlation was found. Our data show that the TaqMan method is sensitive enough to detect antigen-specific CD8 T-cell responses in the genital mucosa of individual mice, which may contribute to the development of effective vaccines against genital pathogens.
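The relative mRNA induction described above follows the standard 2^-ΔΔCt relative quantification scheme (target gene normalized to a reference transcript, stimulated versus unstimulated). A minimal sketch with hypothetical Ct values, not the study's data:

```python
def relative_induction(ct_target_stim, ct_ref_stim, ct_target_ctrl, ct_ref_ctrl):
    """Fold induction of a target transcript (e.g. IFN-gamma) after antigen
    stimulation, normalized to a reference transcript (e.g. CD8b or GAPDH),
    using the standard 2^-ddCt method. All Ct values are hypothetical."""
    d_ct_stim = ct_target_stim - ct_ref_stim   # normalize the stimulated sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # normalize the unstimulated control
    dd_ct = d_ct_stim - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: the target amplifies 3 cycles earlier after stimulation
fold = relative_induction(24.0, 20.0, 27.0, 20.0)
print(fold)  # 8.0 (a 2^3-fold induction)
```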

Relevance:

30.00%

Publisher:

Abstract:

This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases when the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and in time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the problems of analysing and modelling geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, that is, when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the consideration of real-space constraints like geomorphology, networks, and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and models' inputs. This problem is approached by using different nonlinear feature selection/feature extraction tools.
To demonstrate the application of machine learning algorithms, several interesting case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazards risk analysis (avalanches, landslides); and assessments of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional models of geostatistics.
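Geo-features of the kind mentioned above are typically derived from a digital elevation model before being fed to an SVM or ANN. A minimal, hypothetical sketch of one such feature (slope magnitude from central differences on a DEM grid):

```python
def dem_features(dem, cell=25.0):
    """Derive a simple geo-feature (slope magnitude) from a digital elevation
    model, the kind of input generated for high-dimensional geo-feature spaces.
    `dem` is a 2-D list of elevations (m); `cell` is the grid spacing (m).
    Interior cells only; computed with central differences."""
    rows, cols = len(dem), len(dem[0])
    slope = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            dz_dx = (dem[i][j + 1] - dem[i][j - 1]) / (2 * cell)
            dz_dy = (dem[i + 1][j] - dem[i - 1][j]) / (2 * cell)
            slope[i][j] = (dz_dx ** 2 + dz_dy ** 2) ** 0.5
    return slope

# Toy 3x3 DEM rising 25 m per 25 m cell to the east: slope magnitude is 1.0
dem = [[0, 25, 50], [0, 25, 50], [0, 25, 50]]
print(dem_features(dem)[1][1])  # 1.0
```

In practice many such layers (slope, aspect, curvature, remote-sensing bands) are stacked per location to form the feature vector the learning algorithm consumes.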

Relevance:

30.00%

Publisher:

Abstract:

Background: Gene expression analysis has emerged as a major biological research area, with real-time quantitative reverse transcription PCR (RT-QPCR) being one of the most accurate and widely used techniques for expression profiling of selected genes. In order to obtain results that are comparable across assays, a stable normalization strategy is required. In general, the normalization of PCR measurements between different samples uses one to several control genes (e.g. housekeeping genes), from which a baseline reference level is constructed. Thus, the choice of the control genes is of utmost importance, yet there is no generally accepted standard technique for screening a large number of candidates and identifying the best ones. Results: We propose a novel approach for scoring and ranking candidate genes for their suitability as control genes. Our approach relies on publicly available microarray data and allows the combination of multiple data sets originating from different platforms and/or representing different pathologies. The use of microarray data allows the screening of tens of thousands of genes, producing very comprehensive lists of candidates. We also provide two lists of candidate control genes: one which is breast cancer-specific and one with more general applicability. Two genes from the breast cancer list which had not previously been used as control genes are identified and validated by RT-QPCR. Open source R functions are available at http://www.isrec.isb-sib.ch/~vpopovic/research/. Conclusion: We proposed a new method for identifying candidate control genes for RT-QPCR which was able to rank thousands of genes according to predefined suitability criteria, and we applied it to the case of breast cancer. We also empirically showed that translating the results from the microarray to the PCR platform was achievable.
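The core idea of scoring candidates by expression stability can be sketched with a much simpler criterion than the paper's method: rank genes by their coefficient of variation across microarray samples (lower = more stable). Gene names and values below are hypothetical:

```python
def rank_control_genes(expr):
    """Rank candidate control genes by expression stability across samples.
    `expr` maps gene -> list of expression values over arrays. Scoring by
    coefficient of variation is a deliberate simplification of the paper's
    ranking approach; the data are made up for illustration."""
    scores = {}
    for gene, vals in expr.items():
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        scores[gene] = (var ** 0.5) / mean  # CV: lower means more stable
    return sorted(scores, key=scores.get)

expr = {
    "GENE_A": [10.0, 10.1, 9.9, 10.0],   # stable across samples: good candidate
    "GENE_B": [5.0, 9.0, 2.0, 12.0],     # highly variable: poor candidate
}
print(rank_control_genes(expr))  # ['GENE_A', 'GENE_B']
```

Combining data sets from different platforms would require ranking within each set and merging the ranks, which is where the published method does the real work.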

Relevance:

30.00%

Publisher:

Abstract:

Natural selection is typically exerted at some specific life stages. If natural selection takes place before a trait can be measured, using conventional models can lead to incorrect inferences about population parameters. When the missing data process relates to the trait of interest, valid inference requires explicit modeling of the missing process. We propose a joint modeling approach, a shared parameter model, to account for nonrandom missing data. It consists of an animal model for the phenotypic data and a logistic model for the missing process, linked by the additive genetic effects. A Bayesian approach is taken and inference is made using integrated nested Laplace approximations. From a simulation study we find that wrongly assuming that missing data are missing at random can result in severely biased estimates of additive genetic variance. Using real data from a wild population of Swiss barn owls (Tyto alba), our model indicates that the missing individuals would display large black spots; we conclude that genes affecting this trait are already under selection before it is expressed. Our model is a tool to correctly estimate the magnitude of both natural selection and additive genetic variance.
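The bias from wrongly assuming data are missing at random can be shown with a toy simulation: when individuals with low trait values are removed by selection before measurement, the naive variance over survivors understates the true variance. This is a made-up normal-trait illustration, not the paper's animal model:

```python
import random

def naive_vs_true_variance(n=100000, threshold=0.0, seed=1):
    """Illustrate non-random missingness: individuals with trait values below
    `threshold` die before the trait is measured, so the variance over the
    survivors (the naive estimate) understates the true trait variance."""
    random.seed(seed)
    traits = [random.gauss(0.0, 1.0) for _ in range(n)]
    observed = [t for t in traits if t > threshold]  # selection before measurement

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    return var(traits), var(observed)

true_var, naive_var = naive_vs_true_variance()
print(naive_var < true_var)  # True: selection shrinks the apparent variance
```

For a standard normal truncated at its mean, the surviving variance is 1 - 2/pi (about 0.36), roughly a third of the true value; a shared parameter model recovers the full variance by modelling the missingness jointly with the trait.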

Relevance:

30.00%

Publisher:

Abstract:

Objective: Aspergillus species are the main pathogens causing invasive fungal infections, but the prevalence of other mould species is rising. Resistance to antifungals among these new emerging pathogens presents a challenge for the management of infections. Conventional susceptibility testing of non-Aspergillus species is laborious and often difficult to interpret. We evaluated a new method for real-time susceptibility testing of moulds based on their growth-related heat production. Methods: Laboratory and clinical strains of Mucor spp. (n = 4), Scedosporium spp. (n = 4) and Fusarium spp. (n = 5) were used. Conventional MIC was determined by microbroth dilution. Isothermal microcalorimetry was performed at 37 °C using Sabouraud dextrose broth (SDB) inoculated with 10^4 spores/ml (determined by microscopic enumeration). SDB without antifungals was used for evaluation of growth characteristics. Detection time was defined as heat flow exceeding 10 µW. For susceptibility testing, serial dilutions of amphotericin B, voriconazole, posaconazole and caspofungin were used. The minimal heat inhibitory concentration (MHIC) was defined as the lowest antifungal concentration inhibiting 50% of the heat produced by the growth control at 48 h (or at 24 h for Mucor spp.). Susceptibility tests were performed in duplicate. Results: The tested mould genera had distinctive heat flow profiles, with a median detection time (range) of 3.4 h (1.9-4.1 h) for Mucor spp., 11.0 h (7.1-13.7 h) for Fusarium spp. and 29.3 h (27.4-33.0 h) for Scedosporium spp. The graph shows the heat flow (in duplicate) of one representative strain from each genus (dashed line marks the detection limit). Species belonging to the same genus showed similar heat production profiles. The table shows MHIC and MIC ranges for the tested moulds and antifungals. Conclusions: Microcalorimetry allowed rapid detection of growth of slow-growing species, such as Fusarium spp. and Scedosporium spp.
Moreover, microcalorimetry offers a new approach for antifungal susceptibility testing of moulds, correlating with conventional MIC values. Interpretation of calorimetric susceptibility data is easy, and real-time data on the effect of different antifungals on mould growth are obtained as well. This method may be used to investigate different mechanisms of action of antifungals, new substances and drug-drug combinations.
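The MHIC criterion defined above (lowest concentration inhibiting 50% of the growth control's heat at the reading time) reduces to a simple threshold scan. A sketch with hypothetical heat readings:

```python
def mhic(heat_by_conc, control_heat, inhibition=0.5):
    """Minimal heat inhibitory concentration: the lowest antifungal
    concentration whose cumulative heat at the reading time (48 h, or 24 h
    for Mucor spp.) is at most (1 - inhibition) of the growth control.
    Inputs are hypothetical (concentration in mg/L -> heat in J)."""
    for conc in sorted(heat_by_conc):
        if heat_by_conc[conc] <= (1 - inhibition) * control_heat:
            return conc
    return None  # no tested concentration reached the inhibition criterion

heat = {0.25: 9.0, 0.5: 6.0, 1.0: 4.0, 2.0: 0.5}  # hypothetical 48 h heat (J)
print(mhic(heat, control_heat=10.0))  # 1.0
```

With a 50% criterion and a 10 J control, the threshold is 5 J, so 1.0 mg/L is the first concentration at or below it.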

Relevance:

30.00%

Publisher:

Abstract:

This thesis consists of four essays in equilibrium asset pricing. The main topic is investors' heterogeneity: I investigate the equilibrium implications for the financial markets when investors have different attitudes toward risk. The first chapter studies why expected risk and remuneration on the aggregate market are negatively related, even though intuition and standard theory suggest a positive relation. I show that the negative trade-off can arise in equilibrium if investors' beliefs about economic fundamentals are procyclically biased and the market Sharpe ratio is countercyclical. I verify that such conditions hold in real markets and I find empirical support for the risk-return dynamics predicted by the model. The second chapter consists of two essays. The first essay studies how heterogeneity in risk preferences interacts with other sources of heterogeneity and how this affects asset prices in equilibrium. Using perceived macroeconomic uncertainty as the source of heterogeneity, the model helps to explain some patterns of financial returns, even when heterogeneity is small, as suggested by survey data. The second essay determines conditions such that equilibrium prices have analytical solutions when investors have heterogeneous risk attitudes and macroeconomic fundamentals feature latent uncertainty. This approach provides additional insights relative to the previous literature, where models require numerical solutions. The third chapter studies why equity claims (i.e. assets paying a single future dividend) feature premia and risk decreasing with the horizon, even though standard models imply the opposite shape. I show that labor relations help to explain the puzzle. When workers have bargaining power to exploit partial income insurance within the firm, wages are smoother and dividends are riskier than in a standard economy.
Distributional risk among workers and shareholders provides a rationale for the short-term equity risk, which leads to downward-sloping term structures of premia and risk for equity claims.

Relevance:

30.00%

Publisher:

Abstract:

This study examined the validity and reliability of a sequential "Run-Bike-Run" test (RBR) in age-group triathletes. Eight Olympic distance (OD) specialists (age 30.0 ± 2.0 years, mass 75.6 ± 1.6 kg, run VO2max 63.8 ± 1.9 ml·kg(-1)·min(-1), cycle VO2peak 56.7 ± 5.1 ml·kg(-1)·min(-1)) performed four trials over 10 days. Trial 1 (TRVO2max) was an incremental treadmill running test. Trials 2 and 3 (RBR1 and RBR2) involved: 1) a 7-min run at 15 km·h(-1) (R1) plus a 1-min transition to 2) cycling to fatigue (2 W·kg(-1) body mass, then 30 W each 3 min); 3) 10-min cycling at 3 W·kg(-1) (Bsubmax); another 1-min transition; and 4) a second 7-min run at 15 km·h(-1) (R2). Trial 4 (TT) was a 30-min cycle plus 20-min run time trial. No significant differences in absolute oxygen uptake (VO2), heart rate (HR), or blood lactate concentration ([BLA]) were evidenced between RBR1 and RBR2. For all measured physiological variables, the limits of agreement were similar, and the mean differences were physiologically unimportant, between trials. Low levels of test-retest error (i.e. ICC > 0.8, CV < 10%) were observed for most (logged) measurements. However, [BLA] post R1 (ICC 0.87, CV 25.1%), [BLA] post Bsubmax (ICC 0.99, CV 16.3%) and [BLA] post R2 (ICC 0.51, CV 22.9%) were least reliable. These error ranges may help coaches detect real changes in training status over time. Moreover, RBR test variables can be used to predict discipline-specific and overall TT performance. Cycle VO2peak, cycle peak power output, and the change between R1 and R2 (ΔR1R2) in [BLA] were most highly related to overall TT distance (r = 0.89, p < 0.01; r = 0.94, p < 0.02; r = 0.86, p < 0.05, respectively). The percentage of TRVO2max at 15 km·h(-1), and ΔR1R2 HR, were also related to run TT distance (r = -0.83 and 0.86, both p < 0.05).

Relevance:

30.00%

Publisher:

Abstract:

Analyzing functional data often leads to finding common factors, for which functional principal component analysis proves to be a useful tool to summarize and characterize the random variation in a function space. The representation in terms of eigenfunctions is optimal in the sense of L2 approximation. However, the eigenfunctions are not always directed towards an interesting and interpretable direction in the context of functional data and thus could obscure the underlying structure. To overcome this difficulty, an alternative to functional principal component analysis is proposed that produces directed components which may be more informative and easier to interpret. These structural components are similar to principal components, but are adapted to situations in which the domain of the function may be decomposed into disjoint intervals such that there is effectively independence between intervals and positive correlation within intervals. The approach is demonstrated with synthetic examples as well as real data. Properties for special cases are also studied.
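One building block of such interval-adapted components can be sketched directly: restrict the discretized curves to one interval and extract the leading eigenvector of that block's covariance by power iteration. This is a stand-in illustration, not the paper's estimator:

```python
def first_component(block):
    """Leading eigenvector of the sample covariance of `block`
    (rows = curves, cols = grid points within ONE interval of the domain),
    found by power iteration. A minimal stand-in for one per-interval
    'structural component'."""
    n, p = len(block), len(block[0])
    means = [sum(row[j] for row in block) / n for j in range(p)]
    centered = [[row[j] - means[j] for j in range(p)] for row in block]
    cov = [[sum(centered[i][a] * centered[i][b] for i in range(n)) / (n - 1)
            for b in range(p)] for a in range(p)]
    v = [1.0] * p
    for _ in range(200):  # power iteration toward the top eigenvector
        w = [sum(cov[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Curves that differ only by a common vertical shift within the interval:
# the leading component is (up to sign) the constant direction.
block = [[1.0, 1.1, 0.9], [2.0, 2.1, 1.9], [0.0, 0.1, -0.1]]
v = first_component(block)
print([round(abs(x), 2) for x in v])  # [0.58, 0.58, 0.58]
```

With effective independence between intervals, running this per interval and concatenating the (zero-padded) directions approximates components supported on single intervals, which is what makes them easier to interpret than global eigenfunctions.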

Relevance:

30.00%

Publisher:

Abstract:

Background: Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species, because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions of species. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on the predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using a) real absences, b) pseudo-absences selected randomly from the background, and c) two-step approaches: pseudo-absences selected from low-suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results: Models built with true absences had the best predictive power, the best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion: If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences and perform model selection based on an information-theoretic approach. However, the resulting models can be expected to have limited fit.
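Strategy b) above (random background pseudo-absences) and the AUC used to compare models can both be sketched compactly. Cell ids and suitability scores here are hypothetical:

```python
import random

def sample_pseudo_absences(background, presences, n, seed=0):
    """Random-background strategy: draw n pseudo-absences uniformly from the
    background, excluding known presence cells. `background` and `presences`
    are collections of (hypothetical) cell ids."""
    rng = random.Random(seed)
    present = set(presences)
    candidates = [c for c in background if c not in present]
    return rng.sample(candidates, n)

def auc(scores_pres, scores_abs):
    """AUC as the Mann-Whitney probability that a random presence scores
    higher than a random absence; ties count half."""
    pairs = len(scores_pres) * len(scores_abs)
    wins = sum((p > a) + 0.5 * (p == a) for p in scores_pres for a in scores_abs)
    return wins / pairs

pres = [0.9, 0.8, 0.7]   # hypothetical model scores at presence sites
abs_ = [0.6, 0.4, 0.8]   # hypothetical scores at (pseudo-)absence sites
print(round(auc(pres, abs_), 3))  # 0.833
```

Fitting the GLM itself and comparing candidate predictor sets by AIC would follow on top of these pieces.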

Relevance:

30.00%

Publisher:

Abstract:

Background: Physical activity (PA) and related energy expenditure (EE) are often assessed by means of a single technique. Because of inherent limitations, single techniques may not allow for an accurate assessment of both PA and related EE. The aim of this study was to develop a model to accurately assess common PA types and durations, and thus EE, in free-living conditions, combining data from a global positioning system (GPS) and 2 accelerometers. Methods: Forty-one volunteers participated in the study. First, a model was developed and adjusted to measured EE with a first group of subjects (Protocol I, n = 12) who performed 6 structured and supervised PA types. Then, the model was validated over 2 experimental phases with 2 groups (n = 12 and n = 17) performing scheduled (Protocol I) and spontaneous common activities in real-life conditions (Protocol II). Predicted EE was compared with actual EE as measured by portable indirect calorimetry. Results: In Protocol I, the performed PA types could be recognized with little error. The duration of each PA type could be predicted with an accuracy below 1 minute. Measured and predicted EE were strongly associated (r = .97, P < .001). Conclusion: Combining GPS and 2 accelerometers allows for an accurate assessment of PA and EE in free-living situations.
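The sensor-fusion logic (classify each epoch from GPS speed plus accelerometer intensity, then sum activity-specific energy costs) can be sketched with illustrative rule-of-thumb thresholds and MET values; the study's calibrated model is not public in this abstract, so everything below is an assumption:

```python
def classify_activity(gps_speed_kmh, acc_counts):
    """Toy fusion of GPS speed and accelerometer intensity into an activity
    label. Thresholds are illustrative, not the study's model."""
    if acc_counts < 50 and gps_speed_kmh < 1:
        return "rest"
    if gps_speed_kmh < 7:
        return "walking"
    if acc_counts < 200:
        return "cycling"   # fast movement but little vertical acceleration
    return "running"

MET = {"rest": 1.0, "walking": 3.5, "cycling": 6.0, "running": 9.0}

def energy_expenditure_kcal(samples, weight_kg, epoch_min=1.0):
    """Sum MET-based EE over (speed, counts) epochs; 1 MET ~ 1 kcal/kg/h."""
    total = 0.0
    for speed, counts in samples:
        total += MET[classify_activity(speed, counts)] * weight_kg * epoch_min / 60.0
    return total

# One hour for a 70 kg subject: 30 min walking then 30 min cycling
samples = [(5.0, 300)] * 30 + [(20.0, 100)] * 30
print(round(energy_expenditure_kcal(samples, 70.0), 1))  # 332.5
```

The study's contribution is precisely in replacing such hand-set thresholds with a model fitted against indirect calorimetry.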

Relevance:

30.00%

Publisher:

Abstract:

Evaluation of segmentation methods is a crucial aspect of image processing, especially in the medical imaging field, where small differences between segmented regions of the anatomy can be of paramount importance. Usually, segmentation evaluation is based on a measure that depends on the number of segmented voxels inside and outside of some reference regions, called gold standards. Although some other measures have also been used, in this work we propose a set of new similarity measures based on different features, such as the location and intensity values of the misclassified voxels, and the connectivity and boundaries of the segmented data. Using the multidimensional information provided by these measures, we propose a new evaluation method whose results are visualized by applying a Principal Component Analysis of the data, yielding a simplified graphical method to compare different segmentation results. We have carried out an intensive study using several classic segmentation methods applied to a set of simulated MRI data of the brain with several noise and RF inhomogeneity levels, and also to real data, showing that the new measures proposed here, and the results obtained from the multidimensional evaluation, improve the robustness of the evaluation and provide a better understanding of the differences between segmentation methods.
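The voxel-count-based baseline that the new feature-based measures are meant to complement is typically an overlap score such as the Dice coefficient. A minimal sketch on hypothetical voxel sets:

```python
def dice(seg, ref):
    """Classic overlap measure between a segmentation and its gold standard:
    twice the intersection over the sum of sizes. `seg` and `ref` are sets of
    voxel coordinates (hypothetical here). Two empty regions agree perfectly."""
    if not (seg or ref):
        return 1.0
    inter = len(seg & ref)
    return 2.0 * inter / (len(seg) + len(ref))

seg = {(0, 0), (0, 1), (1, 0)}   # segmented voxels
ref = {(0, 0), (0, 1), (1, 1)}   # gold-standard voxels
print(round(dice(seg, ref), 3))  # 0.667
```

The paper's point is that a vector of several such measures per method (location, intensity, connectivity, boundary based) can then be projected with PCA to compare methods graphically, rather than relying on any single score.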

Relevance:

30.00%

Publisher:

Abstract:

There are various methods to collect adverse events (AEs) in clinical trials. The method by which AEs are collected in vaccine trials is of special interest: solicited reporting can lead to over-reporting of events that have little or no biological relationship to the vaccine. We assessed the rate of AEs listed in the package insert for the virosomal hepatitis A vaccine Epaxal(®), comparing data collected by solicited or unsolicited self-reporting. In an open, multi-centre post-marketing study, 2675 healthy travellers received single doses of vaccine administered intramuscularly. AEs were recorded based on solicited and unsolicited questioning during a four-day period after vaccination. A total of 2541 questionnaires could be evaluated (95.0% return rate). Solicited self-reporting resulted in significantly higher (p < 0.0001) rates of subjects with AEs than unsolicited reporting, both at baseline (18.9% solicited versus 2.1% unsolicited systemic AEs) and following immunization (29.6% versus 19.3% local AEs; 33.8% versus 18.2% systemic AEs). This could indicate that actual reporting rates of AEs with Epaxal(®) may be substantially lower than described in the package insert. The distribution of AEs differed significantly between the applied methods of collecting AEs. The most common AEs listed in the package insert were reported almost exclusively with solicited questioning. The reporting of local AEs was more likely than that of systemic AEs to be influenced by subjects' sex, age and study centre. Women reported higher rates of AEs than men. The results highlight the need to detail the methods by which vaccine tolerability was reported and assessed.
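A comparison of two reporting rates like the one above is a standard two-proportion test. The counts below are reconstructed from the reported percentages under a hypothetical equal split of the 2541 questionnaires between arms; they are illustrative, not the study's exact groups:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled z-test for the difference between two reporting rates
    (e.g. solicited vs unsolicited systemic AEs). |z| > 1.96 corresponds to
    p < 0.05 two-sided."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# ~33.8% vs ~18.2% systemic AEs, hypothetical split of n = 2541
z = two_proportion_z(430, 1271, 231, 1270)
print(abs(z) > 1.96)  # True: consistent with the reported p < 0.0001
```

With samples of this size, a 15-point gap in rates yields a z statistic far beyond any conventional significance threshold.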

Relevance:

30.00%

Publisher:

Abstract:

Over thirty years ago, Leamer (1983) - among many others - expressed doubts about the quality and usefulness of empirical analyses for the economic profession by stating that "hardly anyone takes data analyses seriously. Or perhaps more accurately, hardly anyone takes anyone else's data analyses seriously" (p. 37). Improvements in data quality, more robust estimation methods and the evolution of better research designs seem to make that assertion no longer justifiable (see Angrist and Pischke (2010) for a recent response to Leamer's essay). The economic profession and policy makers alike often rely on empirical evidence as a means to investigate policy-relevant questions. The approach of using scientifically rigorous and systematic evidence to identify policies and programs that are capable of improving policy-relevant outcomes is known under the increasingly popular notion of evidence-based policy. Evidence-based economic policy often relies on randomized or quasi-natural experiments in order to identify causal effects of policies. These can require relatively strong assumptions or raise concerns about external validity. In the context of this thesis, potential concerns are, for example, endogeneity of policy reforms with respect to the business cycle in the first chapter, the trade-off between precision and bias in the regression-discontinuity setting in chapter 2, and non-representativeness of the sample due to self-selection in chapter 3. While the identification strategies are very useful to gain insights into the causal effects of specific policy questions, transforming the evidence into concrete policy conclusions can be challenging. Policy development should therefore rely on the systematic evidence of a whole body of research on a specific policy question rather than on a single analysis.
In this sense, this thesis cannot and should not be viewed as a comprehensive analysis of specific policy issues, but rather as a first step towards a better understanding of certain aspects of a policy question. The thesis applies new and innovative identification strategies to policy-relevant and topical questions in the fields of labor economics and behavioral environmental economics. Each chapter relies on a different identification strategy. In the first chapter, we employ a difference-in-differences approach to exploit the quasi-experimental change in the entitlement to the maximum unemployment benefit duration to identify the medium-run effects of reduced benefit durations on post-unemployment outcomes. Shortening benefit duration carries a double dividend: it generates fiscal benefits without deteriorating the quality of job matches. On the contrary, shortened benefit durations improve medium-run earnings and employment, possibly by containing the negative effects of skill depreciation or stigmatization. While the first chapter provides only indirect evidence on the underlying behavioral channels, in the second chapter I develop a novel approach that allows one to learn about the relative importance of the two key margins of job search - reservation wage choice and search effort. In the framework of a standard non-stationary job search model, I show how the exit rate from unemployment can be decomposed in a way that is informative on reservation wage movements over the unemployment spell. The empirical analysis relies on a sharp discontinuity in unemployment benefit entitlement, which can be exploited in a regression-discontinuity approach to identify the effects of extended benefit durations on unemployment and survivor functions. I find evidence that calls for an important role of reservation wage choices in job search behavior. This can have direct implications for the optimal design of unemployment insurance policies.
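The difference-in-differences idea used in the first chapter reduces, in its canonical 2x2 form, to subtracting the control group's before-after change from the treated group's. A textbook sketch with made-up earnings data, not the chapter's full specification:

```python
def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Canonical 2x2 difference-in-differences estimator: the change in mean
    outcome for the treated group minus the change for the control group,
    which nets out common time trends under the parallel-trends assumption."""
    def mean(xs):
        return sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical post-unemployment earnings (in 1000s) around a benefit reform
treated = ([30.0, 32.0], [33.0, 35.0])   # mean rises by 3.0
control = ([31.0, 29.0], [31.5, 30.5])   # mean rises by 1.0
print(did_estimate(treated[0], treated[1], control[0], control[1]))  # 2.0
```

In the actual analysis the same contrast is run in a regression with covariates and many periods, but the identifying logic is this subtraction.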
The third chapter - while thematically detached from the other chapters - addresses one of the major policy challenges of the 21st century: climate change and resource consumption. Many governments have recently put energy efficiency on top of their agendas. While pricing instruments aimed at regulating energy demand have often been found to be short-lived and difficult to enforce politically, the focus of energy conservation programs has shifted towards behavioral approaches, such as the provision of information or social norm feedback. The third chapter describes a randomized controlled field experiment in which we discuss the effectiveness of different types of feedback on residential electricity consumption. We find that detailed and real-time feedback caused persistent electricity reductions on the order of 3 to 5% of daily electricity consumption. Social norm information can also generate substantial electricity savings when designed appropriately. The findings suggest that behavioral approaches constitute an effective and relatively cheap way of improving residential energy efficiency.