58 results for Single-process Models
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Study carried out during a stay at the Plataforma Solar de Almería between December 2006 and January 2007. The reactive dyes Procion Red H-E7B and Cibacron Red FN-R were degraded at pilot-plant scale by means of the photo-Fenton process, applied both as a sole treatment and as a pretreatment for a biological process. The solar-assisted photo-Fenton process was carried out in a Compound Parabolic Collector (CPC) solar photoreactor, and the biological treatment in an Immobilised Biomass Reactor (RBI). As a starting point, and with the aim of studying the reproducibility of the system, results from experiments previously performed at laboratory scale under artificial light were taken as a reference. Total Organic Carbon (TOC) was used as the indicator of the removal of the dyes and their intermediates. When the photo-Fenton process was applied as the sole treatment, concentrations of 10 mg·l⁻¹ Fe(II) and 250 mg·l⁻¹ H2O2 to degrade 250 mg·l⁻¹ Procion Red H-E7B, and of 20 mg·l⁻¹ Fe(II) and 500 mg·l⁻¹ H2O2 to degrade 250 mg·l⁻¹ Cibacron Red FN-R, reproduced the results obtained in the laboratory, with TOC removal levels of 82% and 86%, respectively. Moreover, the beneficial use of sunlight in the photo-Fenton process, together with the CPC configuration, increased the degradation rate with respect to the previous results, allowing the Fe(II) concentration to be reduced from 10 to 2 mg·l⁻¹ (Procion Red H-E7B) and from 20 to 5 mg·l⁻¹ (Cibacron Red FN-R) without loss of effectiveness. On the other hand, for the combined photo-Fenton/biological treatment system at pilot-plant scale, oxidant concentrations of 225 mg·l⁻¹ H2O2 for Cibacron Red FN-R and 65 mg·l⁻¹ H2O2 for Procion Red H-E7B were sufficient to generate biodegradable intermediate solutions and thus feed the RBI, even improving on the results previously obtained in the laboratory.
Abstract:
The relationship between competition and performance-related pay has been analyzed in single-principal-single-agent models. While this approach yields good predictions for managerial pay schemes, the predictions fail to apply to employees at lower tiers of a firm's hierarchy. In this paper, a principal-multi-agent model of incentive pay is developed that makes it possible to analyze the effect of changes in the competitiveness of markets on lower-tier incentive payment schemes. The results explain why the payment schemes of agents located at low and mid tiers are less sensitive to changes in competition when aggregated firm data are used. JEL classification numbers: D82, J21, L13, L22. Keywords: Cournot competition, Contract delegation, Moral hazard, Entry, Market size, Wage cost.
Abstract:
Sediment composition is mainly controlled by the nature of the source rock(s) and by chemical (weathering) and physical processes (mechanical crushing, abrasion, hydrodynamic sorting) during alteration and transport. Although the factors controlling these processes are conceptually well understood, detailed quantifications of the compositional changes induced by a single process are rare, as are examples where the effects of several processes can be distinguished. The present study was designed to characterize the role of mechanical crushing and sorting in the absence of chemical weathering. Twenty sediment samples were taken from Alpine glaciers that erode almost pure granitoid lithologies. For each sample, 11 grain-size fractions from granules to clay (ø grades −1 to 9) were separated, and each fraction was analysed for its chemical composition. The presence of clear steps in the box-plots of all parts (in adequate ilr and clr scales) against ø is assumed to be explained by the typical crystal-size ranges of the relevant mineral phases. These scatter plots and the biplot suggest splitting the full grain-size range into three groups: coarser than ø = 4 (comparatively rich in SiO2, Na2O, K2O and Al2O3, and dominated by "felsic" minerals like quartz and feldspar), finer than ø = 8 (comparatively rich in TiO2, MnO, MgO and Fe2O3, mostly related to "mafic" sheet silicates like biotite and chlorite), and intermediate grain sizes (4 ≤ ø < 8; comparatively rich in P2O5 and CaO, related to apatite and some feldspar). To further test the absence of chemical weathering, the observed compositions were regressed against three explanatory variables: a trend on grain size in the ø scale, a step function for ø ≥ 4, and another for ø ≥ 8. The original hypothesis was that the trend could be identified with weathering effects, whereas each step function would highlight those minerals whose largest characteristic size lies at its lower end. Results suggest that this assumption is reasonable for the step functions, but that, besides weathering, other factors (the different mechanical behaviour of minerals) also make an important contribution to the trend. Key words: sediment, geochemistry, grain size, regression, step function
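A minimal sketch of the regression design described in this abstract, using synthetic data: one log-ratio coordinate of the composition is regressed on a linear trend in ø plus step functions at ø = 4 and ø = 8. Variable names and values are illustrative assumptions, not the authors' data or code.

```python
import numpy as np

# Grain-size classes in phi units, granules to clay (phi = -1 .. 9)
phi = np.arange(-1, 10)

# y: one clr/ilr coordinate of the composition per fraction (synthetic here)
rng = np.random.default_rng(0)
y = 0.05 * phi + 1.2 * (phi >= 4) + 0.8 * (phi >= 8) + rng.normal(0, 0.1, phi.size)

# Design matrix: intercept, linear trend in phi, and the two step functions
X = np.column_stack([
    np.ones_like(phi, dtype=float),  # intercept
    phi.astype(float),               # trend on grain size (weathering proxy)
    (phi >= 4).astype(float),        # step at phi = 4 ("felsic" boundary)
    (phi >= 8).astype(float),        # step at phi = 8 ("mafic" boundary)
])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "trend", "step_phi4", "step_phi8"], beta)))
```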
Abstract:
A two-dimensional array based on single-photon avalanche diodes for triggered imaging systems is presented. The diodes are operated in the gated mode of acquisition to reduce the probability of detecting noise counts that interfere with photon-arrival events. In addition, low reverse-bias overvoltages are used to lessen the dark count rate. Experimental results demonstrate that the prototype, fabricated in a standard HV-CMOS process, eliminates afterpulses and offers a reduced dark-count probability under the proposed modes of operation. The detector exhibits a dynamic range of 15 bits with short gated "on" periods of 10 ns and a reverse-bias overvoltage of 1.0 V.
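A back-of-the-envelope sketch of why short gates suppress noise: for a Poisson dark-count process, the probability that a dark count lands inside one gated "on" window is roughly 1 − exp(−DCR·t_gate). The dark count rate below is an assumed figure for illustration, not a value from the paper.

```python
import math

dcr = 1e3          # dark count rate in counts/s (illustrative assumption)
t_gate = 10e-9     # gated "on" period of 10 ns, as in the abstract
p_dark = 1 - math.exp(-dcr * t_gate)
print(f"P(dark count within one gate) ~ {p_dark:.2e}")
```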
Abstract:
Markowitz portfolio theory (1952) has induced research into the efficiency of portfolio management. This paper studies existing nonparametric efficiency measurement approaches for single-period portfolio selection from a theoretical perspective and generalises currently used efficiency measures to the full mean-variance space. To this end, we introduce the efficiency improvement possibility function (a variation on the shortage function), study its axiomatic properties in the context of the Markowitz efficient frontier, and establish a link to the indirect mean-variance utility function. This framework allows distinguishing between portfolio efficiency and allocative efficiency. Furthermore, it permits retrieving information about the revealed risk aversion of investors. The efficiency improvement possibility function thus provides a more general framework for gauging the efficiency of portfolio management using nonparametric frontier envelopment methods based on quadratic optimisation.
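A hedged sketch of a shortage-type efficiency measure in mean-variance space, under assumed returns and covariances (not data or code from the paper): find the largest δ such that the evaluated portfolio could simultaneously gain δ·g_mu in mean and shed δ·g_var in variance while remaining attainable on the frontier.

```python
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.10])            # expected returns (assumed)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.06]])          # covariance matrix (assumed)

w0 = np.array([1/3, 1/3, 1/3])               # portfolio under evaluation
m0, v0 = mu @ w0, w0 @ cov @ w0
g_mu, g_var = 1.0, 1.0                       # direction vector g = (g_mu, g_var)

def neg_delta(x):
    return -x[-1]                            # maximize the improvement delta

cons = [
    # mean must improve by at least delta * g_mu
    {"type": "ineq", "fun": lambda x: mu @ x[:-1] - (m0 + x[-1] * g_mu)},
    # variance must fall by at least delta * g_var
    {"type": "ineq", "fun": lambda x: (v0 - x[-1] * g_var) - x[:-1] @ cov @ x[:-1]},
    # full-investment budget constraint
    {"type": "eq", "fun": lambda x: np.sum(x[:-1]) - 1.0},
]

x0 = np.append(w0, 0.0)
bounds = [(0, 1)] * 3 + [(0, None)]          # long-only weights, delta >= 0
res = minimize(neg_delta, x0, method="SLSQP", bounds=bounds, constraints=cons)
print("efficiency improvement delta:", res.x[-1])
```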
Abstract:
This comment corrects errors in the estimation process that appears in Martins (2001). The first error is in the parametric probit estimation: the previously presented results do not maximize the log-likelihood function. At the global maximum, more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation-probability estimates may lie outside the interval [0,1]. We have solved the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
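A minimal sketch of why local smoothing with a non-negative kernel keeps probability estimates inside [0,1]: the smoothed estimator is a convex combination of the observed 0/1 outcomes. The data are synthetic and the estimator below is a plain Nadaraya-Watson smoother for illustration, not the actual Klein and Spady (1993) estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=200)                      # single index x'b (synthetic)
p_true = 1 / (1 + np.exp(-2 * x))
y = (rng.uniform(size=200) < p_true).astype(float)

def nw_probability(x0, x, y, h=0.3):
    """Kernel-weighted average of binary outcomes; bounded in [0, 1]."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)    # Gaussian kernel, always >= 0
    return np.sum(w * y) / np.sum(w)

grid = np.linspace(-2, 2, 5)
print([round(nw_probability(g, x, y), 3) for g in grid])
```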
Abstract:
We characterize the class of strategy-proof social choice functions on the domain of symmetric single-peaked preferences. This class is strictly larger than the set of generalized median voter schemes (the class of strategy-proof and tops-only social choice functions on the domain of single-peaked preferences characterized by Moulin (1980)) since, on the domain of symmetric single-peaked preferences, generalized median voter schemes can be disturbed by discontinuity points and remain strategy-proof on the smaller domain. Our result identifies the specific nature of these discontinuities, which makes it possible to design non-onto social choice functions that deal with feasibility constraints.
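For illustration, a minimal sketch of a generalized median voter scheme in the sense of Moulin (1980): the outcome is the median of the n reported peaks together with n − 1 fixed phantom peaks, which makes truthful reporting a dominant strategy. The phantom values below are arbitrary assumptions.

```python
import statistics

def generalized_median(peaks, phantoms):
    """Median of reported peaks plus fixed phantom peaks (strategy-proof)."""
    assert len(phantoms) == len(peaks) - 1, "need n-1 phantoms for n voters"
    return statistics.median(list(peaks) + list(phantoms))

peaks = [0.2, 0.5, 0.9]      # voters' reported ideal points
phantoms = [0.4, 0.6]        # fixed phantom voters (assumed values)
print(generalized_median(peaks, phantoms))  # no voter gains by misreporting
```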
Abstract:
In this paper, we present a stochastic model for disability insurance contracts. The model is based on a discrete-time non-homogeneous semi-Markov process (DTNHSMP) to which the backward recurrence time process is introduced. This permits a more exhaustive study of disability evolution and a more efficient approach to the duration problem. The use of semi-Markov reward processes makes it possible to derive equations for the prospective and retrospective mathematical reserves. The model is applied to a sample of contracts drawn at random from a mutual insurance company.
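A minimal sketch, not the paper's model, of a discrete-time semi-Markov path carrying a backward recurrence time (the time already spent in the current state), which is the quantity the DTNHSMP extension tracks. The states, transition probabilities and holding-time law below are illustrative assumptions.

```python
import random

P = {"active":   {"active": 0.0, "disabled": 0.9, "dead": 0.1},
     "disabled": {"active": 0.7, "disabled": 0.0, "dead": 0.3}}

def holding_time(state, t):
    # Non-homogeneous: sojourns lengthen with age t (assumed form)
    return 1 + random.randint(0, 1 + t // 10)

def simulate(horizon=30, seed=7):
    random.seed(seed)
    t, state, b = 0, "active", 0          # b = backward recurrence time
    path = []
    while t < horizon and state != "dead":
        stay = holding_time(state, t)
        for _ in range(stay):
            path.append((t, state, b))
            t, b = t + 1, b + 1
            if t >= horizon:
                return path
        nxt = random.choices(list(P[state]), weights=P[state].values())[0]
        state, b = nxt, 0                 # reset recurrence time on a jump
    path.append((t, state, 0))
    return path

for row in simulate()[:8]:
    print(row)                            # (time, state, time-in-state)
```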
Abstract:
The Bologna Plan is currently being implemented in Spain in order to join the European Higher Education Area (EHEA). As one of its main objectives, the EHEA seeks to homogenise degree programmes and, specifically, the competences acquired by any student regardless of where they have studied. To this end, there are European initiatives (such as the Tuning project) working to define competences for all university degrees. The project presents an analysis of twenty universities on different continents carried out to identify teaching-learning models for non-technical competences. The research additionally focuses on the written communication competence. The main data source has been the information provided on the universities' web pages, and most especially their curricula.
Abstract:
In this paper we present a novel structure-from-motion (SfM) approach able to infer 3D deformable models from uncalibrated stereo images. Using a stereo setup dramatically improves the 3D model estimation when the observed 3D shape is mostly deforming without undergoing strong rigid motion. Our approach first calibrates the stereo system automatically and then computes a single metric rigid structure for each frame. These 3D shapes are then aligned to a reference view using a RANSAC method in order to compute the mean shape of the object and to select the subset of points that have remained rigid throughout the sequence without deforming. The selected rigid points are used to compute frame-wise shape registration and to extract the motion parameters robustly from frame to frame. Finally, all this information is used in a global optimization stage with bundle adjustment, which refines the frame-wise initial solution and also recovers the non-rigid 3D model. We show results on synthetic and real data that prove the performance of the proposed method even when there is no rigid motion in the original sequence.
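A hedged sketch of the alignment step described above: fitting a rigid rotation and translation between point sets with RANSAC, so that points which stay rigid can be flagged as inliers. It uses the standard Kabsch/Procrustes solution; the threshold, iteration count and demo data are assumptions, not the paper's settings.

```python
import numpy as np

def rigid_fit(A, B):
    """Least-squares rotation R and translation t mapping A onto B (Kabsch)."""
    ca, cb = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def ransac_rigid(A, B, iters=200, thresh=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(A), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(A), 3, replace=False)   # minimal rigid sample
        R, t = rigid_fit(A[idx], B[idx])
        resid = np.linalg.norm((A @ R.T + t) - B, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return rigid_fit(A[best_inliers], B[best_inliers]), best_inliers

# Demo: B is a rigidly moved copy of A with 10 deforming points
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
B = A @ R_true.T + np.array([0.5, -0.2, 0.1])
B[:10] += rng.normal(scale=0.5, size=(10, 3))        # non-rigid points
(R, t), rigid_mask = ransac_rigid(A, B)
print("points flagged rigid:", rigid_mask.sum())
```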
Abstract:
Fault Detection and Isolation (FDI) methods based on analytical redundancy (that is, comparing the current behaviour of the process with the expected behaviour obtained from a mathematical model of it) are widely used for diagnosing systems when a mathematical model is available. An algorithm has been implemented that builds this analytical redundancy from the plant model using the approach known as Structural Analysis.
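A minimal sketch of the structural-analysis idea on a toy model (an assumption, not the plant in the project): represent the model as a bipartite graph of equations versus unknown variables; equations left unmatched by a maximum matching carry redundancy and are candidates for residual generation.

```python
import networkx as nx
from networkx.algorithms import bipartite

# Toy structural model: which unknowns appear in which equations (assumed)
equations = {"e1": ["x1"], "e2": ["x1", "x2"], "e3": ["x2"], "e4": ["x2"]}

G = nx.Graph()
for eq, unknowns in equations.items():
    for v in unknowns:
        G.add_edge(eq, v)

# Match equations to unknowns; unmatched equations are redundant relations
matching = bipartite.maximum_matching(G, top_nodes=set(equations))
redundant = [eq for eq in equations if eq not in matching]
print("redundant equations (residual candidates):", redundant)
```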
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has grown to account for up to 70% of the trading volume on some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a field where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modelling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics, necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behaviour: trend following or mean reversion. The former is grouped under the so-called technical models and the latter under so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signal they generate passes the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; or, more technically, if the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, through a co-integration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reverting SDE and its variations. A model for forecasting any economic or financial magnitude can be properly defined with scientific rigour and yet lack any economic value, making it useless from a practical point of view. This is why this project could not be complete without a backtest of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is why we emphasise the calibration of the strategies' parameters to the prevailing market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial-market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis. No other mathematical or statistical software was used.
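As an illustration of the mean-reversion leg, a minimal sketch of calibrating an Ornstein-Uhlenbeck spread dX = κ(θ − X)dt + σdW through its exact AR(1) discretization and reading off a trading band. The spread is simulated here, not the DJIA data used in the project, and the sketch is in Python rather than the project's MATLAB.

```python
import numpy as np

rng = np.random.default_rng(42)
dt, kappa, theta, sigma = 1/390, 8.0, 0.0, 0.5   # minutely grid (assumed)

# Simulate an OU spread with the exact discretization
n = 5000
x = np.empty(n)
x[0] = 0.0
a = np.exp(-kappa * dt)
b = sigma * np.sqrt((1 - np.exp(-2 * kappa * dt)) / (2 * kappa))
for i in range(1, n):
    x[i] = theta + a * (x[i - 1] - theta) + b * rng.normal()

# Calibrate: regress x_t on x_{t-1}, i.e. x_t = c + phi * x_{t-1} + eps
phi, c = np.polyfit(x[:-1], x[1:], 1)
kappa_hat = -np.log(phi) / dt
theta_hat = c / (1 - phi)
resid = x[1:] - (c + phi * x[:-1])
sigma_hat = resid.std(ddof=2) * np.sqrt(2 * kappa_hat / (1 - phi**2))
print(f"kappa={kappa_hat:.2f} theta={theta_hat:.3f} sigma={sigma_hat:.3f}")

# Entry/exit thresholds at +/- one stationary standard deviation
band = sigma_hat / np.sqrt(2 * kappa_hat)
print(f"trade band: +/-{band:.3f} around {theta_hat:.3f}")
```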
Abstract:
This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data-generating process. We assume that all agents employ the data they observe (which may be distinct for different sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communications. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence, and alter the behavior of the variables in the model in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium, and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
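A hedged sketch of the kind of statistical learning algorithm this literature typically assumes: constant-gain recursive least squares updating of beliefs about a perceived linear law of motion. The regression model and the gain below are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def rls_update(beta, R, x, y, gain=0.02):
    """One constant-gain recursive least squares step for y ~ x'beta."""
    R = R + gain * (np.outer(x, x) - R)                 # second-moment update
    beta = beta + gain * np.linalg.solve(R, x) * (y - x @ beta)
    return beta, R

rng = np.random.default_rng(3)
true_beta = np.array([0.5, 0.9])                        # true law of motion
beta, R = np.zeros(2), np.eye(2)                        # initial beliefs
for _ in range(2000):
    x = np.array([1.0, rng.normal()])                   # observed regressors
    y = x @ true_beta + 0.1 * rng.normal()              # realized outcome
    beta, R = rls_update(beta, R, x, y)
print(beta)   # drifts near true_beta in this simple, correctly specified case
```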