960 results for Real Electricity Markets Data
Abstract:
A traditional photonic-force microscope (PFM) results in huge sets of data, which require tedious numerical analysis. In this paper, we propose instead an analog signal processor to attain real-time capabilities while retaining the richness of the traditional PFM data. Our system is devoted to intracellular measurements and is fully interactive through the use of a haptic joystick. Using our specialized analog hardware along with a dedicated algorithm, we can extract the full 3D stiffness matrix of the optical trap in real time, including the off-diagonal cross-terms. Our system is also capable of simultaneously recording data for subsequent offline analysis. This allows us to check that a good correlation exists between the classical analysis of stiffness and our real-time measurements. We monitor the PFM beads using an optical microscope. The force-feedback mechanism of the haptic joystick helps us in interactively guiding the bead inside living cells and collecting information from its (possibly anisotropic) environment. The instantaneous stiffness measurements are also displayed in real time on a graphical user interface. The whole system has been built and is operational; here we present early results that confirm the consistency of the real-time measurements with offline computations.
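As a rough illustration of the kind of offline computation the abstract refers to (not the authors' analog hardware or algorithm): for a trapped bead in thermal equilibrium, the full 3D stiffness matrix K, including off-diagonal cross-terms, can be estimated from the covariance C of the bead's position fluctuations via the equipartition relation K = k_B T C^{-1}. The function name, array shapes, and sample values below are assumptions.

```python
import numpy as np

KB_T = 4.11e-21  # thermal energy at ~298 K, in joules

def stiffness_matrix(positions):
    """Estimate the 3x3 trap stiffness matrix from bead positions (N x 3, metres).

    For a 3D harmonic trap, equipartition gives C = k_B T * K^{-1}, where C is the
    position covariance matrix, so K = k_B T * C^{-1} (cross-terms included).
    """
    C = np.cov(positions, rowvar=False)   # 3x3 position covariance
    return KB_T * np.linalg.inv(C)

# Toy usage with synthetic, slightly anisotropic fluctuations
rng = np.random.default_rng(0)
cov_true = np.array([[4.0, 0.5, 0.0],
                     [0.5, 2.0, 0.0],
                     [0.0, 0.0, 1.0]]) * 1e-16   # m^2
samples = rng.multivariate_normal(np.zeros(3), cov_true, size=100_000)
print(stiffness_matrix(samples))   # stiffness in N/m for inputs in metres
```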
Abstract:
We analyze the impact of trade liberalization, removal of production subsidies, and elimination of consumption distortions in world sugar markets using a partial-equilibrium international sugar model calibrated on 2002 market data and current policies. The removal of trade distortions alone induces a 27% price increase, while the removal of all trade and production distortions induces a 48% increase by 2011/12 relative to the baseline. Aggregate trade expands moderately, but the location of production and trade patterns change substantially. Protectionist OECD countries (the EU, Japan, the US) experience an import expansion or export reduction and a significant contraction in production in unfettered markets. Competitive producers in both OECD countries (Australia) and non-OECD countries (Brazil, Cuba), and even some protected producers (Indonesia, Turkey), expand production when all distortions are removed. Consumption distortions have marginal impacts on world markets and the location of production. We discuss the significance of these results in the context of mounting pressures to increase market access in highly protected OECD countries and the impact on non-OECD countries.
Abstract:
There have been reports of increasing numbers of cases of malaria among migrants and travelers. Although microscopic examination of blood smears remains the "gold standard" in diagnosis, this method suffers from insufficient sensitivity and requires considerable expertise. To improve diagnosis, a multiplex real-time PCR was developed. One set of generic primers targeting a highly conserved region of the 18S rRNA gene of the genus Plasmodium was designed; the amplified region was polymorphic enough to design four species-specific probes for P. falciparum, P. vivax, P. malariae, and P. ovale. Real-time PCR with the species-specific probes specifically detected a single plasmid copy of P. falciparum, P. vivax, P. malariae, and P. ovale. The same sensitivity was achieved for all species with the 18S screening probe, which detects the whole genus. Ninety-seven blood samples were investigated. For 66 of them (60 patients), microscopy and real-time PCR results were compared and showed a crude agreement of 86% for the detection of plasmodia. Discordant results were reevaluated using clinical data, a second expert microscopic and molecular analysis (Geneva laboratory and the Swiss Tropical Institute in Basel), and sequencing.
All nine discordances between the 18S screening PCR and microscopy were resolved in favor of the molecular method, as were eight of the nine discordances at the species level for the species-specific PCR among the 31 samples positive by both methods. The other 31 blood samples were used to monitor antimalarial treatment in seven patients. The number of parasites measured by real-time PCR fell rapidly in six of the seven patients, in parallel with the parasitemia determined microscopically. This suggests a role for quantitative PCR in the monitoring of patients receiving antimalarial therapy.
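A toy illustration (not the study's data or code) of what the reported "crude agreement" means: it is simply the proportion of samples on which microscopy and real-time PCR give the same positive/negative call.

```python
# Hypothetical calls for seven samples (1 = parasite detected)
microscopy = [1, 1, 0, 1, 0, 0, 1]
pcr        = [1, 1, 0, 0, 0, 1, 1]

concordant = sum(m == p for m, p in zip(microscopy, pcr))
print(f"crude agreement: {concordant / len(microscopy):.0%}")   # 71% for this toy set
```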
Abstract:
For a wide range of environmental, hydrological, and engineering applications there is a fast-growing need for high-resolution imaging. In this context, waveform tomographic imaging of crosshole georadar data is a powerful method able to provide images of pertinent electrical properties in near-surface environments with unprecedented spatial resolution. In contrast, conventional ray-based tomographic methods, which consider only a very limited part of the recorded signal (first-arrival traveltimes and maximum first-cycle amplitudes), suffer from inherent limitations in resolution and may prove to be inadequate in complex environments. For a typical crosshole georadar survey, the potential improvement in resolution when using waveform-based approaches instead of ray-based approaches is roughly one order of magnitude. Moreover, the spatial resolution of waveform-based inversions is comparable to that of common logging methods. While waveform tomographic imaging has become well established in exploration seismology over the past two decades, it is still comparatively underdeveloped in the georadar domain despite corresponding needs. Recently, different groups have presented finite-difference time-domain waveform inversion schemes for crosshole georadar data, which are adaptations and extensions of Tarantola's seminal nonlinear generalized least-squares approach developed for the seismic case. First applications of these new crosshole georadar waveform inversion schemes to synthetic and field data have shown promising results. However, little is known about the limits and performance of such schemes in complex environments. The general motivation of my thesis is therefore the evaluation of the robustness and limitations of waveform inversion algorithms for crosshole georadar data, in order to apply such schemes to a wide range of real-world problems.

One crucial issue in making any waveform scheme applicable and effective for real-world crosshole georadar problems is the accurate estimation of the source wavelet, which is unknown in reality. Waveform inversion schemes for crosshole georadar data require forward simulations of the wavefield in order to iteratively solve the inverse problem, so accurate knowledge of the source wavelet is critically important for their successful application. Relatively small differences in the estimated source wavelet shape can lead to large differences in the resulting tomograms. In the first part of my thesis, I explore the viability and robustness of a relatively simple iterative deconvolution technique that incorporates the estimation of the source wavelet into the waveform inversion procedure rather than adding additional model parameters to the inversion problem. Extensive tests indicate that this source wavelet estimation technique is simple yet effective, and is able to provide remarkably accurate and robust estimates of the source wavelet in the presence of strong heterogeneity in both the dielectric permittivity and the electrical conductivity, as well as significant ambient noise in the recorded data. Furthermore, our tests also indicate that the approach is insensitive to the phase characteristics of the starting wavelet, which is not the case when the wavelet estimation is directly incorporated into the inverse problem.

Another critical issue with crosshole georadar waveform inversion schemes that clearly needs to be investigated is the consequence of the common assumption of frequency-independent electromagnetic constitutive parameters. This is crucial because, in reality, these parameters are known to be frequency-dependent and complex, so recorded georadar data may show significant dispersive behaviour. In particular, in the presence of water there is a wide body of evidence showing that the dielectric permittivity can be significantly frequency-dependent over the GPR frequency range, due to a variety of relaxation processes. The second part of my thesis is therefore dedicated to evaluating the reconstruction limits of a non-dispersive crosshole georadar waveform inversion scheme in the presence of varying degrees of dielectric dispersion. I show that the inversion algorithm, combined with the iterative deconvolution-based source wavelet estimation procedure, which can partially account for the frequency-dependent effects through an "effective" wavelet, performs remarkably well in weakly to moderately dispersive environments and is able to provide adequate tomographic reconstructions.
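As a minimal sketch of one standard way to estimate a source wavelet by regularized frequency-domain deconvolution (an illustration of the general idea, not the thesis's exact procedure): given observed traces and impulse-response traces simulated with the current model, the wavelet spectrum solves a least-squares deconvolution stabilized by a water level, and the estimate is refined after each model update. The function name `estimate_wavelet`, the `water_level` parameter, and the array shapes are assumptions.

```python
import numpy as np

def estimate_wavelet(observed, synthetic, water_level=1e-3):
    """Regularized frequency-domain deconvolution of a source wavelet.

    observed, synthetic: arrays of shape (n_traces, n_samples); 'synthetic'
    holds impulse-response traces simulated with the current model.
    """
    D = np.fft.rfft(observed, axis=1)            # data spectra
    G = np.fft.rfft(synthetic, axis=1)           # modelled impulse responses
    num = np.sum(np.conj(G) * D, axis=0)         # stack over traces
    den = np.sum(np.abs(G) ** 2, axis=0)
    den = np.maximum(den, water_level * den.max())   # water-level stabilization
    W = num / den                                # least-squares wavelet spectrum
    return np.fft.irfft(W, n=observed.shape[1])

# In an iterative scheme the returned wavelet is used for the next forward
# simulation and re-estimated after each model update until it stabilizes.
```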
Abstract:
This paper presents a review of methodology for semi-supervised modeling with kernel methods when the manifold assumption is guaranteed to be satisfied. It concerns environmental data modeling on natural manifolds, such as the complex topographies of mountainous regions, where environmental processes are highly influenced by the relief. These relations, possibly regionalized and nonlinear, can be modeled from data with machine learning by using digital elevation models in semi-supervised kernel methods. The range of tools and methodological issues discussed in the study includes feature selection and semi-supervised Support Vector algorithms. A real case study devoted to data-driven modeling of meteorological fields illustrates the discussed approach.
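A minimal sketch of semi-supervised modeling with a kernel classifier (an illustration only, not the paper's algorithms): features derived from a digital elevation model serve as inputs, the few labelled samples carry class labels while unlabelled ones are marked with -1, and scikit-learn's SelfTrainingClassifier wraps an RBF-kernel SVM. The feature set, threshold, and toy data are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

# Assumed DEM-derived features per grid cell: [elevation, slope, curvature]
X = rng.normal(size=(500, 3))
y = np.full(500, -1)                                  # -1 marks unlabelled cells
labelled = rng.choice(500, size=40, replace=False)
y[labelled] = (X[labelled, 0] + 0.5 * X[labelled, 1] > 0).astype(int)  # toy labels

# RBF-kernel SVM trained in a self-training (semi-supervised) loop
model = SelfTrainingClassifier(SVC(kernel="rbf", probability=True), threshold=0.8)
model.fit(X, y)
print(model.predict(X[:5]))
```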
Abstract:
BACKGROUND: The reverse transcription quantitative real-time polymerase chain reaction (RT-qPCR) is a widely used, highly sensitive laboratory technique to rapidly and easily detect, identify and quantify gene expression. Reliable RT-qPCR data necessitates accurate normalization with validated control genes (reference genes) whose expression is constant in all studied conditions. This stability has to be demonstrated. We performed a literature search for studies using quantitative or semi-quantitative PCR in the rat spared nerve injury (SNI) model of neuropathic pain to verify whether any reference genes had previously been validated. We then analyzed the stability over time of 7 reference genes commonly used in the nervous system - specifically in the spinal cord dorsal horn and the dorsal root ganglion (DRG). These were: actin beta (Actb), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), 18S ribosomal RNA (18S), the ribosomal proteins L13a (RPL13a) and L29 (RPL29), hypoxanthine phosphoribosyltransferase 1 (HPRT1) and hydroxymethylbilane synthase (HMBS). We compared the candidate genes and established a stability ranking using the geNorm algorithm. Finally, we assessed the number of reference genes necessary for accurate normalization in this neuropathic pain model. RESULTS: We found GAPDH, HMBS, Actb, HPRT1 and 18S cited as reference genes in the literature on studies using the SNI model. Only HPRT1 and 18S had previously been demonstrated to be stable in RT-qPCR arrays. All the genes tested in this study using the geNorm algorithm presented gene stability values (M-values) acceptable enough to qualify them as potential reference genes in both the DRG and the spinal cord. Using the coefficient of variation, 18S failed the 50% cut-off with a value of 61% in the DRG. The two most stable genes in the dorsal horn were RPL29 and RPL13a; in the DRG they were HPRT1 and Actb. Using a 0.15 cut-off for pairwise variations, we found that any pair of stable reference genes was sufficient for the normalization process. CONCLUSIONS: In the rat SNI model, we validated and ranked Actb, RPL29, RPL13a, HMBS, GAPDH, HPRT1 and 18S as good reference genes in the spinal cord. In the DRG, 18S did not fulfill the stability criteria. The combination of any two stable reference genes was sufficient to provide accurate normalization.
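A minimal sketch of the geNorm-style stability measure behind the ranking (an illustration, not the study's code): for each candidate gene, M is the mean standard deviation of its log2 expression ratios against every other candidate, and lower M means more stable. The full geNorm algorithm additionally excludes the least stable gene and recomputes iteratively; the expression matrix and values below are made up.

```python
import numpy as np

def genorm_m_values(expr, genes):
    """geNorm-style M-values; expr is (n_samples, n_genes) of relative quantities."""
    log_expr = np.log2(expr)
    n = len(genes)
    m_values = {}
    for j in range(n):
        # std of the log2 ratio of gene j against each other gene, averaged
        sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
               for k in range(n) if k != j]
        m_values[genes[j]] = float(np.mean(sds))
    return dict(sorted(m_values.items(), key=lambda kv: kv[1]))  # most stable first

# Toy data: 8 samples x 4 candidate reference genes (made-up values)
rng = np.random.default_rng(1)
expr = rng.lognormal(mean=0.0, sigma=0.2, size=(8, 4))
print(genorm_m_values(expr, ["Actb", "GAPDH", "HPRT1", "RPL13a"]))
```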
Abstract:
One of the disadvantages of old age is that there is more past than future: this, however, may be turned into an advantage if the wealth of experience and, hopefully, wisdom gained in the past can be reflected upon and throw some light on possible future trends. To an extent, then, this talk is necessarily personal, certainly nostalgic, but also self-critical and inquisitive about our understanding of the discipline of statistics. A number of almost philosophical themes will run through the talk: search for appropriate modelling in relation to the real problem envisaged, emphasis on sensible balances between simplicity and complexity, the relative roles of theory and practice, the nature of communication of inferential ideas to the statistical layman, the inter-related roles of teaching, consultation and research. A list of keywords might be: identification of sample space and its mathematical structure, choices between transform and stay, the role of parametric modelling, the role of a sample space metric, the underused hypothesis lattice, the nature of compositional change, particularly in relation to the modelling of processes. While the main theme will be relevance to compositional data analysis we shall point to substantial implications for general multivariate analysis arising from experience of the development of compositional data analysis…
Abstract:
We study the quantitative properties of a dynamic general equilibrium model in which agents face both idiosyncratic and aggregate income risk, state-dependent borrowing constraints that bind in some but not all periods, and incomplete markets. Optimal individual consumption-savings plans and equilibrium asset prices are computed under various assumptions about income uncertainty. We then investigate whether our general equilibrium model with incomplete markets replicates two empirical observations: the high correlation between individual consumption and individual income, and the equity premium puzzle. We find that, when the driving processes are calibrated according to data on wage income in different sectors of the US economy, the results move in the direction of explaining these observations, but the model falls short of explaining the observed correlations quantitatively. If the incomes of agents are assumed to be independent of each other, the observations can be explained quantitatively.
Abstract:
It is common in econometric applications that several hypothesis tests are carried out at the same time. The problem then becomes how to decide which hypotheses to reject, accounting for the multitude of tests. In this paper, we suggest a stepwise multiple testing procedure which asymptotically controls the familywise error rate at a desired level. Compared to related single-step methods, our procedure is more powerful in the sense that it often will reject more false hypotheses. In addition, we advocate the use of studentization when it is feasible. Unlike some stepwise methods, our method implicitly captures the joint dependence structure of the test statistics, which results in increased ability to detect alternative hypotheses. We prove our method asymptotically controls the familywise error rate under minimal assumptions. We present our methodology in the context of comparing several strategies to a common benchmark and deciding which strategies actually beat the benchmark. However, our ideas can easily be extended and/or modified to other contexts, such as making inference for the individual regression coefficients in a multiple regression framework. Some simulation studies show the improvements of our methods over previous proposals. We also provide an application to a set of real data.
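For illustration of the stepwise logic only: the paper's procedure is a bootstrap-based, studentized step-down method that exploits the joint dependence of the test statistics, which is not reproduced here. The classical Holm step-down below merely shows how a stepwise rule rejects hypotheses sequentially while controlling the familywise error rate; the function name and p-values are assumptions.

```python
def holm_stepdown(p_values, alpha=0.05):
    """Return the indices of hypotheses rejected by Holm's step-down rule."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])   # most significant first
    rejected = []
    for step, idx in enumerate(order):
        if p_values[idx] <= alpha / (m - step):   # threshold loosens at each step
            rejected.append(idx)
        else:
            break                                  # stop at the first failure
    return rejected

print(holm_stepdown([0.001, 0.012, 0.04, 0.30]))   # rejects the first two here
```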
Abstract:
This paper tests for the market environment within which US fiscal policy operates; that is, we test for the incompleteness of the US government bond market. We document the stochastic properties of US debt and deficits and then consider the ability of competing optimal tax models to account for this behaviour. We show that when a government pursues an optimal tax policy and issues a full set of contingent claims, the value of debt has the same or less persistence than other variables in the economy and declines in response to higher deficit shocks. By contrast, if governments only issue one-period risk-free bonds (incomplete markets), debt shows more persistence than other variables and increases in response to expenditure shocks. Maintaining the hypothesis of Ramsey behaviour, US data conflict with the complete-markets model.
Abstract:
This paper provides empirical evidence on the explanatory factors affecting the introductory prices of new pharmaceuticals in a heavily regulated and highly subsidized market. We collect a data set consisting of all new chemical entities launched in Spain between 1997 and 2005, and model launch prices. We found that, unlike in the US and Sweden, therapeutically "innovative" products are not overpriced relative to "imitative" ones. Price setting is mainly used as a mechanism to adjust for inflation, independently of the degree of innovation. The drugs that enter through the centralized EMA approval procedure are overpriced, which may be a consequence of market globalization and international price setting.
Abstract:
In this paper I analyze the effects of insider trading on real investment and on the insurance role of financial markets. There is a single entrepreneur who, at a first stage, chooses the level of investment in a risky business. At the second stage, an asset with random payoff is issued, and the entrepreneur then receives some privileged information on the likely realization of the production return. At the third stage, trading occurs on the asset market, where the entrepreneur faces the aggregate demand coming from a continuum of rational uninformed traders and some noise traders. I compare the equilibrium with insider trading (when the entrepreneur trades on her inside information in the asset market) with the equilibrium in the same market without insider trading. I find that permitting insider trading tends to decrease the level of real investment. Moreover, the asset market is thinner and the entrepreneur's net supply of the asset and the hedge ratio are lower, although the asset price is more informative and more volatile.
Abstract:
We study whether people's preferences in an unbalanced market are affected by whether they are on the excess supply side or the excess demand side of the market. Our analysis is based on the comparison of behavior between two types of experimental gift exchange markets, which vary only with respect to whether first or second movers are on the long side of the market. The direction of market imbalance could influence subjects' motivation, as second movers, workers, might react differently to favorable actions by first movers, firms, in the two cases. Our data show strong deviations from the standard game-theoretic prediction. However, we only find secondary treatment effects. First movers are not more generous when they are in excess supply and second movers do not respond less favorably when they are in excess demand. Competition has only minor psychological effects in our data.
Abstract:
Most central banks perceive a trade-off between stabilizing inflation and stabilizing the gap between output and desired output. However, the standard new Keynesian framework implies no such trade-off. In that framework, stabilizing inflation is equivalent to stabilizing the welfare-relevant output gap. In this paper, we argue that this property of the new Keynesian framework, which we call the divine coincidence, is due to a special feature of the model: the absence of non-trivial real imperfections. We focus on one such real imperfection, namely, real wage rigidities. When the baseline new Keynesian model is extended to allow for real wage rigidities, the divine coincidence disappears, and central banks indeed face a trade-off between stabilizing inflation and stabilizing the welfare-relevant output gap. We show that not only does the extended model have more realistic normative implications, but it also has appealing positive properties. In particular, it provides a natural interpretation for the dynamic inflation-unemployment relation found in the data.
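For illustration in standard textbook notation (not equations reproduced from the paper): in the baseline new Keynesian Phillips curve, inflation depends only on the welfare-relevant output gap, so stabilizing one stabilizes the other; with real wage rigidities an endogenous cost-push-like term appears and the equivalence breaks. Here \pi_t is inflation, x_t the welfare-relevant output gap, and u_t the cost-push term.

```latex
% Baseline NKPC: setting \pi_t = 0 for all t also closes the gap x_t
\pi_t = \beta \, \mathbb{E}_t \pi_{t+1} + \kappa \, x_t
% With real wage rigidities a cost-push-like term u_t appears,
% so \pi_t = 0 no longer implies x_t = 0: a stabilization trade-off emerges
\pi_t = \beta \, \mathbb{E}_t \pi_{t+1} + \kappa \, x_t + u_t
```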
Abstract:
We construct a utility-based model of fluctuations with nominal rigidities and unemployment, and draw its implications for the unemployment-inflation trade-off and for the conduct of monetary policy. We proceed in two steps. We first leave nominal rigidities aside. We show that, under a standard utility specification, productivity shocks have no effect on unemployment in the constrained efficient allocation. We then focus on the implications of alternative real wage setting mechanisms for fluctuations in unemployment. We show the role of labor market frictions and real wage rigidities in determining the effects of productivity shocks on unemployment. We then introduce nominal rigidities in the form of staggered price setting by firms. We derive the relation between inflation and unemployment and discuss how it is influenced by the presence of labor market frictions and real wage rigidities. We show the nature of the trade-off between inflation and unemployment stabilization, and its dependence on labor market characteristics. We draw the implications for optimal monetary policy.