15 results for Borrowing constraint

in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevance:

10.00%

Publisher:

Abstract:

Topology optimization consists in finding the spatial distribution of a given total volume of material so that the resulting structure has some optimal property, for instance, maximization of structural stiffness or maximization of the fundamental eigenfrequency. In this paper a Genetic Algorithm (GA) employing a tree-based representation is developed to generate initial feasible individuals that remain feasible upon crossover and mutation and, as such, do not require any repair operator to ensure feasibility. Several application examples are studied involving the topology optimization of structures where the objective function is the maximization of stiffness and the maximization of the first and second eigenfrequencies of a plate, all cases having a prescribed material volume constraint.
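The abstract does not give the tree encoding itself, but the key idea of feasibility-preserving operators can be illustrated with a much simpler sketch: below, each individual is a plain bit vector with exactly the prescribed number of filled cells, and mutation and crossover are written so that this volume constraint can never be violated. All names and parameters are illustrative, not taken from the paper, and the fitness evaluation (e.g., a finite-element stiffness analysis) is outside the sketch.

```python
import random

def random_layout(n_cells, n_material):
    """Random feasible individual: exactly n_material cells are filled."""
    layout = [0] * n_cells
    for i in random.sample(range(n_cells), n_material):
        layout[i] = 1
    return layout

def mutate(layout):
    """Swap one filled cell with one empty cell, keeping the volume fixed."""
    filled = [i for i, v in enumerate(layout) if v == 1]
    empty = [i for i, v in enumerate(layout) if v == 0]
    child = layout[:]
    if filled and empty:
        child[random.choice(filled)] = 0
        child[random.choice(empty)] = 1
    return child

def crossover(a, b):
    """Keep cells filled in both parents, then top up at random to the
    prescribed volume, so the child is feasible by construction."""
    n_material = sum(a)
    child = [1 if x == 1 and y == 1 else 0 for x, y in zip(a, b)]
    candidates = [i for i, v in enumerate(child) if v == 0]
    for i in random.sample(candidates, n_material - sum(child)):
        child[i] = 1
    return child

parents = [random_layout(100, 40) for _ in range(2)]
child = mutate(crossover(*parents))
assert sum(child) == 40   # the volume constraint is preserved by construction
```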

Relevance:

10.00%

Publisher:

Abstract:

Recent literature has proved that many classical pricing models (Black and Scholes, Heston, etc.) and risk measures (VaR, CVaR, etc.) may lead to “pathological meaningless situations”, since traders can build sequences of portfolios whose risk level tends to −∞ and whose expected return tends to +∞, i.e., (risk = −∞, return = +∞). Such a sequence of strategies may be called a “good deal”. This paper focuses on the risk measures VaR and CVaR and analyzes this caveat in a discrete-time complete pricing model. Under quite general conditions the explicit expression of a good deal is given, and its sensitivity with respect to some possible measurement errors is provided as well. We point out that a critical property is the absence of short sales. In such a case we first construct a “shadow riskless asset” (SRA) without short sales, and then the good deal is obtained by borrowing more and more money so as to invest in the SRA. It is also shown that the SRA is of interest in itself, even if there are short-selling restrictions.
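As a point of reference for the two risk measures named in the abstract, the sketch below computes empirical VaR and CVaR from a sample of portfolio losses and shows how leveraging a position (the borrowing mechanism behind a "good deal") scales both measures. The return distribution, confidence level and leverage values are hypothetical and deliberately favourable; they are not taken from the paper.

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR of a loss sample at confidence level alpha
    (losses are positive numbers when money is lost)."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)          # alpha-quantile of the losses
    cvar = losses[losses >= var].mean()       # mean loss in the tail beyond VaR
    return var, cvar

# Hypothetical position whose bad tail is still a gain: leveraging it drives
# VaR and CVaR toward -infinity while the expected return grows without bound,
# which is the pathology the abstract calls a "good deal".
rng = np.random.default_rng(0)
returns = rng.normal(loc=0.12, scale=0.05, size=100_000)

for leverage in (1, 10, 100):
    var, cvar = var_cvar(-leverage * returns)        # losses = -returns
    expected_return = leverage * returns.mean()
    print(f"leverage {leverage:>3}: E[return]={expected_return:7.2f}  "
          f"VaR={var:7.2f}  CVaR={cvar:7.2f}")
```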

Relevance:

10.00%

Publisher:

Abstract:

We study the implications for two-Higgs-doublet models of the recent announcement at the LHC giving a tantalizing hint for a Higgs boson of mass 125 GeV decaying into two photons. We require that the experimental result be within a factor of 2 of the theoretical standard model prediction, and analyze the type I and type II models as well as the lepton-specific and flipped models, subject to this requirement. It is assumed that there is no new physics other than two Higgs doublets. In all of the models, we display the allowed region of parameter space taking the recent LHC announcement at face value, and we analyze the W⁺W⁻, ZZ, bb̄, and τ⁺τ⁻ expectations in these allowed regions. Throughout the entire range of parameter space allowed by the γγ constraint, the numbers of events for Higgs decays into WW, ZZ, and bb̄ are not changed from the standard model by more than a factor of 2. In contrast, in the lepton-specific model, decays to τ⁺τ⁻ are very sensitive across the entire γγ-allowed region.
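The "within a factor of 2" requirement quoted above is conventionally expressed through the diphoton signal strength, the event rate normalised to the standard model expectation. Written out (as a plausible reading of the abstract, not a formula quoted from the paper):

```latex
\mu_{\gamma\gamma}
  = \frac{\sigma(pp \to h)\,\mathrm{BR}(h \to \gamma\gamma)}
         {\left[\sigma(pp \to h)\,\mathrm{BR}(h \to \gamma\gamma)\right]_{\mathrm{SM}}},
\qquad
\tfrac{1}{2} \;\le\; \mu_{\gamma\gamma} \;\le\; 2 .
```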

Relevance:

10.00%

Publisher:

Abstract:

Photovoltaic systems produce clean electricity that is, on our time scale, inexhaustible. The International Energy Agency regards photovoltaic technology as one of the most promising, expecting in its most optimistic scenarios that by 2050 it may account for 20% of world electricity production, the equivalent of 18,000 TWh. However, despite the remarkable development of recent decades, the main obstacle to a wider adoption of these systems is their still high cost, combined with their weak overall performance. Although the cost and inefficiency of photovoltaic modules have been decreasing, system efficiency remains dependent on external factors subject to great variability, such as temperature and irradiance, and on the technological limitations and lack of synergy of their constituent equipment. The objective of this dissertation is therefore to assess the optimization potential of photovoltaic systems using modelling and simulation techniques. To this end, the main factors that condition the performance of these systems were first identified. Secondly, as a practical case study, several photovoltaic system configurations and their components were modelled in Matlab™/Simulink™. The main advantages and disadvantages of using different modelling tools for the optimization of these systems were then analysed, as well as the incorporation of artificial intelligence techniques to address the new challenges this technology will face in the future. This study concludes that modelling is not only a useful instrument for optimizing current PV systems but will certainly be an indispensable tool for meeting the challenges of the new applications of this technology, and artificial intelligence (AI) based modelling techniques will surely play a leading role in this respect. The practical modelling case also showed that modelling is a useful tool to support teaching and research. It should nevertheless be kept in mind that a model is only an approximation of reality, and its results must always be interpreted critically.
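As a flavour of the kind of component model such a Matlab/Simulink study builds on, the sketch below evaluates the textbook ideal single-diode I-V characteristic of a PV cell (no series or shunt resistance) and locates its maximum power point. All parameter values are hypothetical and not taken from the dissertation.

```python
import numpy as np

q = 1.602e-19          # electron charge [C]
k = 1.381e-23          # Boltzmann constant [J/K]

def iv_curve(v, irradiance=1000.0, temperature=298.15,
             i_sc_ref=8.0, i_0=1e-9, n=1.3):
    """Cell current [A] for a voltage sweep v [V], ideal single-diode model:
    I = I_ph - I_0 * (exp(V / (n * V_t)) - 1), with I_ph proportional to irradiance."""
    v_t = n * k * temperature / q                  # thermal voltage [V]
    i_ph = i_sc_ref * irradiance / 1000.0          # photocurrent at given irradiance
    return i_ph - i_0 * np.expm1(v / v_t)

v = np.linspace(0.0, 0.8, 800)
i = np.clip(iv_curve(v), 0.0, None)                # no negative current past Voc
p = v * i
print("Max power point: %.2f W at %.3f V" % (p.max(), v[p.argmax()]))
```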

Relevance:

10.00%

Publisher:

Abstract:

Dissertation presented to the Escola Superior de Educação to obtain the degree of Master in Education Sciences, specialization in Supervision in Education.

Relevance:

10.00%

Publisher:

Abstract:

Dissertation presented to the Escola Superior de Educação de Lisboa to obtain the degree of Master in Education Sciences, specialization in Supervision in Education.

Relevance:

10.00%

Publisher:

Abstract:

Chapter in book proceedings with peer review: First Iberian Conference, IbPRIA 2003, Puerto de Andratx, Mallorca, Spain, June 4-6, 2003. Proceedings.

Relevance:

10.00%

Publisher:

Abstract:

Dissertation presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the degree of Master in Audiovisual and Multimedia.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a methodology for establishing the investment and trading strategies of a power generation company. These strategies are integrated in the ITEM-Game simulator in order to test their results when played against defined strategies used by other players. The developed strategies focus on investment decisions, although trading strategies are also implemented to obtain base-case results. Two cases are studied, considering three players with the same trading strategy. In case 1, all players also have the same investment strategy, driven by a target market share. In case 2, player 1 has an improved investment strategy with a target share twice that of players 2 and 3. The results highlight the influence of CO2 and fuel prices on the company's investment decisions. The influence of the budget constraint, which may prevent a player from taking the desired investment decision, is also observed.
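To make the interplay between the market-share target and the budget constraint concrete, here is a minimal, hypothetical investment rule loosely following the abstract; the unit size, costs and the rule itself are illustrative assumptions, not the strategies implemented in the ITEM-Game simulator.

```python
def investment_decision(own_capacity_mw, market_capacity_mw,
                        target_share, unit_cost_meur, budget_meur,
                        unit_size_mw=500.0):
    """Decide how many new units to build this round: invest while the
    player's share of total installed capacity is below its target share,
    stopping as soon as the budget constraint binds."""
    new_units = 0
    while (own_capacity_mw / market_capacity_mw) < target_share and \
            budget_meur >= unit_cost_meur:
        new_units += 1
        own_capacity_mw += unit_size_mw
        market_capacity_mw += unit_size_mw
        budget_meur -= unit_cost_meur
    return new_units

# Example: a player targeting 40% share but limited by a 900 MEUR budget.
print(investment_decision(own_capacity_mw=3000.0, market_capacity_mw=12000.0,
                          target_share=0.40, unit_cost_meur=400.0,
                          budget_meur=900.0))
```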

Relevance:

10.00%

Publisher:

Abstract:

Project work submitted to the Escola Superior de Teatro e Cinema in fulfilment of the requirements for the degree of Master in Theatre, specialization in Performing Arts - Acting.

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the self-scheduling problem of a thermal power producer taking part in a pool-based electricity market as a price-taker, having bilateral contracts and being emission-constrained. An approach based on stochastic mixed-integer linear programming is proposed for solving the self-scheduling problem. Uncertainty regarding the electricity price is considered through a set of scenarios computed by simulation and scenario reduction. Thermal units are modelled by variable costs, start-up costs and technical operating constraints, such as forbidden operating zones, ramp-up/down limits and minimum up/down time limits. A requirement on emission allowances to mitigate the carbon footprint is modelled by a stochastic constraint. Supply functions for different emission allowance levels are assessed in order to establish the optimal bidding strategy. A case study is presented to illustrate the usefulness and the proficiency of the proposed approach in supporting bidding strategies.
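A highly simplified sketch of the problem class follows: a single unit, no start-up costs, ramps or forbidden zones, and two hypothetical price scenarios. It only illustrates the two-stage structure (commitment decided before prices are known, dispatch per scenario) and a scenario-weighted emission cap, not the paper's full model; it assumes the PuLP library is available.

```python
import pulp

# Hypothetical data: 3 hours, 2 equiprobable price scenarios (EUR/MWh).
T, S = range(3), range(2)
prob = {0: 0.5, 1: 0.5}
price = {0: [42.0, 55.0, 61.0], 1: [35.0, 40.0, 48.0]}
p_min, p_max = 100.0, 400.0        # MW
var_cost = 38.0                    # EUR/MWh
emis_rate = 0.35                   # tCO2/MWh
emis_cap = 300.0                   # tCO2 allowed in expectation

m = pulp.LpProblem("self_scheduling", pulp.LpMaximize)
u = pulp.LpVariable.dicts("u", T, cat="Binary")                  # commitment (here-and-now)
p = pulp.LpVariable.dicts("p", [(t, s) for t in T for s in S],   # dispatch per scenario
                          lowBound=0.0)

# Expected profit over the price scenarios.
m += pulp.lpSum(prob[s] * (price[s][t] - var_cost) * p[(t, s)]
                for t in T for s in S)

# Dispatch limits linked to the commitment decision.
for t in T:
    for s in S:
        m += p[(t, s)] <= p_max * u[t]
        m += p[(t, s)] >= p_min * u[t]

# Stochastic (expected-value) emission allowance constraint.
m += pulp.lpSum(prob[s] * emis_rate * p[(t, s)] for t in T for s in S) <= emis_cap

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("commitment:", [int(pulp.value(u[t])) for t in T])
print("dispatch  :", {(t, s): pulp.value(p[(t, s)]) for t in T for s in S})
```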

Relevance:

10.00%

Publisher:

Abstract:

Motivated by the dark matter and the baryon asymmetry problems, we analyze a complex singlet extension of the Standard Model with a Z₂ symmetry (which provides a dark matter candidate). After a detailed two-loop calculation of the renormalization group equations for the new scalar sector, we study the radiative stability of the model up to a high energy scale (with the constraint that the 126 GeV Higgs boson found at the LHC is in the spectrum) and find it requires the existence of a new scalar state mixing with the Higgs with a mass larger than 140 GeV. This bound is not very sensitive to the cutoff scale as long as the latter is larger than 10¹⁰ GeV. We then include all experimental and observational constraints/measurements from collider data, from dark matter direct detection experiments, and from the Planck satellite, and in addition require stability at least up to the grand unified theory scale, to find that the lower bound is raised to about 170 GeV, while the dark matter particle must be heavier than about 50 GeV.

Relevance:

10.00%

Publisher:

Abstract:

The development of biopharmaceutical manufacturing processes presents critical constraints, the major one being that these molecules are synthesized by living cells, which exhibit inherent behavioral variability due to their high sensitivity to small fluctuations in the cultivation environment. To speed up the development process and to control this critical manufacturing step, it is relevant to develop high-throughput and in situ monitoring techniques, respectively. Here, high-throughput mid-infrared (MIR) spectral analysis of dehydrated cell pellets and in situ near-infrared (NIR) spectral analysis of the whole culture broth were compared to monitor plasmid production in recombinant Escherichia coli cultures. Good partial least squares (PLS) regression models were built, either based on MIR or NIR spectral data, yielding high coefficients of determination (R²) and low predictive errors (root mean square error, or RMSE) to estimate host cell growth, plasmid production, carbon source consumption (glucose and glycerol), and by-product acetate production and consumption. The predictive errors for biomass, plasmid, glucose, glycerol, and acetate based on MIR data were 0.7 g/L, 9 mg/L, 0.3 g/L, 0.4 g/L, and 0.4 g/L, respectively, whereas for NIR data the predictive errors obtained were 0.4 g/L, 8 mg/L, 0.3 g/L, 0.2 g/L, and 0.4 g/L, respectively. The models obtained are robust, as they are valid for cultivations conducted with different media compositions and with different cultivation strategies (batch and fed-batch). Besides being conducted in situ with a sterilized fiber optic probe, NIR spectroscopy allows building PLS models for estimating plasmid, glucose, and acetate that are as accurate as those obtained from the high-throughput MIR setup, and better models for estimating biomass and glycerol, yielding decreases of 57% and 50% in RMSE, respectively, compared to the MIR setup. However, MIR spectroscopy could be a valid alternative in the case of optimization protocols, due to possible space constraints or high costs associated with the use of multi-fiber optic probes for multi-bioreactors. In this case, MIR could be conducted in a high-throughput manner, analyzing hundreds of culture samples in a rapid and automatic mode.
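A PLS calibration of the kind described above can be sketched in a few lines with scikit-learn. The matrix below is random data standing in for spectra (rows = culture samples, columns = wavenumbers) and the response is a synthetic "biomass" value; the number of latent variables, sample counts and the R²/RMSE obtained are therefore purely illustrative, not the paper's results.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for NIR spectra and reference biomass values [g/L].
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 400))                       # 120 samples x 400 wavenumbers
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=120)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=5).fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

rmse = np.sqrt(mean_squared_error(y_test, y_pred))    # predictive error
r2 = pls.score(X_test, y_test)                        # coefficient of determination
print(f"R2 = {r2:.2f}, RMSE = {rmse:.2f} g/L")
```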

Relevance:

10.00%

Publisher:

Abstract:

Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient implementation of an unsupervised linear unmixing method on GPUs using CUDA. The method finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses. The results presented here indicate that the GPU implementation can significantly accelerate the method's execution over big datasets while maintaining the method's accuracy.
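The linear mixing model behind the unmixing problem can be stated in a short sketch: each pixel is a nonnegative combination of endmember signatures whose fractions sum to one. The code below only inverts this model for known endmembers using nonnegative least squares; it is not an implementation of SISAL (which estimates the endmembers themselves), and the signatures, fractions and noise level are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_bands, n_endmembers = 200, 3
E = rng.uniform(0.0, 1.0, size=(n_bands, n_endmembers))    # hypothetical endmember signatures

true_a = np.array([0.6, 0.3, 0.1])                          # true material fractions
pixel = E @ true_a + rng.normal(scale=0.001, size=n_bands)  # mixed pixel + sensor noise

a, _ = nnls(E, pixel)          # nonnegativity-constrained least-squares abundances
a /= a.sum()                   # renormalise to approximate the sum-to-one constraint
print(np.round(a, 3))
```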

Relevance:

10.00%

Publisher:

Abstract:

In the framework of multibody dynamics, the path motion constraint enforces that a body follows a predefined curve, with its rotations with respect to the curve's moving frame also prescribed. The kinematic constraint formulation requires the evaluation of the fourth derivative of the curve with respect to its arc length. Even though higher-order polynomials lead to unwanted curve oscillations, at least a fifth-order polynomial is required to formulate this constraint, whereas from the point of view of geometric control lower-order polynomials are preferred. This work shows that for multibody dynamic formulations with dependent coordinates the use of cubic polynomials is possible, with a dynamic response similar to that obtained with higher-order polynomials. The stabilization of the equations of motion, which is always required to control the constraint violations during long analysis periods due to the inherent numerical errors of the integration process, is enough to correct the error introduced by using a lower-order polynomial interpolation, thus forfeiting the analytical requirement for higher-order polynomials.
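The role of the fourth derivative can be made concrete with a small check: a cubic interpolant has an identically zero fourth derivative along the path, while a quintic interpolant does not, which is the analytical reason the constraint formulation nominally asks for at least fifth-order polynomials. The sketch below uses SciPy splines on hypothetical path data; it only illustrates this property and is not the paper's multibody formulation.

```python
import numpy as np
from scipy.interpolate import CubicSpline, make_interp_spline

# Path coordinate parametrised by (approximate) arc length; values are hypothetical.
s = np.linspace(0.0, 10.0, 21)
x = np.sin(0.5 * s)

cubic = CubicSpline(s, x)                   # degree-3 interpolant
quintic = make_interp_spline(s, x, k=5)     # degree-5 interpolant

s_eval = np.linspace(0.5, 9.5, 5)
print("4th derivative, cubic  :", cubic.derivative(4)(s_eval))    # identically zero
print("4th derivative, quintic:", quintic.derivative(4)(s_eval))  # generally nonzero
```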