994 results for arm's length price methodology
Abstract:
Work presented within the scope of the Master's degree in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.
Abstract:
OBJECTIVE: To analyze Government strategies for reducing the prices of antiretroviral medicines for HIV in Brazil. METHODS: Analysis of Ministry of Health purchases of antiretroviral medicines from 2005 to 2013. Expenditures and treatment costs per year were analyzed and compared with international prices of atazanavir. Price reductions were estimated based on the terms of a voluntary license of patent rights and technology transfer in the Partnership for Productive Development Agreement for atazanavir. RESULTS: Atazanavir, a patented medicine, represented a significant share of the expenditures on antiretrovirals purchased from the private sector. Prices in Brazil were higher than international references, and no evidence was found of a relationship between purchase volume and the price paid by the Ministry of Health. Concerning the latest strategy to reduce prices, involving local production of the 200 mg capsule, the price reduction was greater than the estimated reduction. As for the 300 mg capsule, the amounts paid in the first two years after the Partnership for Productive Development Agreement were close to the estimated values. Nominal prices for both dosage forms remained virtually constant between 2011 (when the Partnership for Productive Development Agreement was signed), 2012, and 2013 (after the establishment of the Partnership). CONCLUSIONS: Reducing medicine prices is complex in limited-competition environments. The use of a Partnership for Productive Development Agreement as a strategy to increase local production capacity and to reduce prices raises issues regarding its effectiveness in reducing prices and in overcoming patent barriers. Investments in research and development that can stimulate technological accumulation should be considered by the Government to strengthen its bargaining power when negotiating medicine prices in a monopoly situation.
Abstract:
A design methodology for the monolithic integration of inductor-based DC-DC converters is proposed in this paper. A power loss model of the power stage, including the drive circuits, is defined in order to optimize efficiency. Based on this model, and taking a 0.35 µm CMOS process as reference, a buck converter was designed and fabricated. For a given set of operating conditions, the power loss model allows the design parameters of the power stage to be optimized, including the gate-driver tapering factor and the width of the power MOSFETs. Experimental results obtained from a buck converter switching at 100 MHz are presented to validate the proposed methodology.
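As a rough illustration of the kind of loss-model optimization described above, the sketch below minimizes a simplified two-term loss model in which conduction loss falls with MOSFET width while gate-drive loss rises with it. The operating point, process constants, and functional forms are assumptions for illustration, not the paper's actual model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative operating point (assumed values, not from the paper)
I_out = 0.5        # output current [A]
V_in = 3.3         # input voltage [V]
f_sw = 100e6       # switching frequency [Hz]

# Simplified per-width coefficients (hypothetical process constants)
r_on_per_um = 10.0   # on-resistance of a 1 um wide device [ohm*um]
c_g_per_um = 2e-15   # gate capacitance per um of width [F/um]

def power_loss(width_um):
    """Total loss = conduction (I^2 * Ron) + gate-drive (C * V^2 * f)."""
    r_on = r_on_per_um / width_um
    p_cond = I_out**2 * r_on
    p_gate = c_g_per_um * width_um * V_in**2 * f_sw
    return p_cond + p_gate

res = minimize_scalar(power_loss, bounds=(100, 1e5), method="bounded")
print(f"optimal width ~ {res.x:.0f} um, loss ~ {1e3 * res.fun:.1f} mW")
```

In this toy model the optimum sits where the two loss terms are equal, which is the usual trade-off a tapering-factor/device-width optimization balances.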
Abstract:
Also published in Lecture Notes in Engineering and Computer Science.
Abstract:
In this paper, a mixed-integer quadratic programming approach is proposed for the short-term hydro scheduling problem, considering head-dependency, discontinuous operating regions, and discharge ramping constraints. As new contributions to earlier studies, market uncertainty is introduced into the model via price scenarios, and risk aversion is incorporated by limiting the volatility of the expected profit through the conditional value-at-risk. Our approach has been applied successfully to a case study based on one of the main Portuguese cascaded hydro systems, requiring negligible computational time.
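A minimal sketch of how conditional value-at-risk can be embedded as linear constraints in a scenario-based profit maximization (the standard Rockafellar-Uryasev linearization), written with cvxpy for brevity. The scenario data, the single aggregate decision variable, and the risk cap are illustrative assumptions, not the paper's MIQP model.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
S = 50                                   # number of price scenarios
prices = rng.uniform(20, 80, size=S)     # assumed scenario prices [EUR/MWh]
probs = np.full(S, 1.0 / S)
alpha, kappa = 0.95, -500.0              # confidence level, cap on CVaR of loss

energy = cp.Variable(nonneg=True)        # stand-in for the hydro schedule
profit = prices * energy                 # per-scenario profit (toy model)

# Rockafellar-Uryasev linearization of CVaR of the loss (-profit)
zeta = cp.Variable()
eta = cp.Variable(S, nonneg=True)
cvar = zeta + (1.0 / (1.0 - alpha)) * (probs @ eta)

constraints = [
    eta >= -profit - zeta,   # eta_s >= loss_s - zeta
    cvar <= kappa,           # risk-aversion cap on the expected tail loss
    energy <= 100.0,         # toy capacity limit [MWh]
]
prob = cp.Problem(cp.Maximize(probs @ profit), constraints)
prob.solve()
print(f"energy = {energy.value:.1f} MWh, expected profit = {prob.value:.0f} EUR")
```

The same auxiliary variables (zeta, eta) carry over unchanged when the deterministic scheduling model is a MIQP, since the CVaR terms stay linear.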
Abstract:
In this paper, we study the order of moves in a mixed international duopoly for differentiated goods, where firms choose whether to set prices sequentially or simultaneously. We discuss the desirable role of the public firm by comparing welfare across the three games. We find that, in all three possible roles, the domestic public firm sets a lower price, and thus produces more, than the foreign private firm.
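A minimal numeric sketch of the simultaneous-move version of such a game, assuming Singh-Vives linear demand for differentiated goods and measuring domestic welfare as consumer surplus plus the public firm's profit (the foreign firm's profit excluded). The parameters and functional forms are illustrative, not the paper's model; iterated best responses recover the stated pattern, with the public firm pricing lower and producing more.

```python
from scipy.optimize import minimize_scalar

# Assumed demand intercept, substitutability, and common marginal cost
alpha, gamma, c = 10.0, 0.5, 2.0

def demands(p1, p2):
    q1 = (alpha * (1 - gamma) - p1 + gamma * p2) / (1 - gamma**2)
    q2 = (alpha * (1 - gamma) - p2 + gamma * p1) / (1 - gamma**2)
    return max(q1, 0.0), max(q2, 0.0)

def domestic_welfare(p1, p2):
    """Consumer surplus plus the public firm's profit (foreign profit excluded)."""
    q1, q2 = demands(p1, p2)
    utility = alpha * (q1 + q2) - (q1**2 + 2 * gamma * q1 * q2 + q2**2) / 2
    cs = utility - p1 * q1 - p2 * q2
    return cs + (p1 - c) * q1

def foreign_profit(p2, p1):
    q1, q2 = demands(p1, p2)
    return (p2 - c) * q2

# Simultaneous-move equilibrium via iterated best responses
p1, p2 = c, c
for _ in range(50):
    p1 = minimize_scalar(lambda p: -domestic_welfare(p, p2),
                         bounds=(0, alpha), method="bounded").x
    p2 = minimize_scalar(lambda p: -foreign_profit(p, p1),
                         bounds=(0, alpha), method="bounded").x

q1, q2 = demands(p1, p2)
print(f"public:  p = {p1:.2f}, q = {q1:.2f}")   # prices at marginal cost
print(f"private: p = {p2:.2f}, q = {q2:.2f}")   # prices with a markup
```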
Abstract:
From the need to create dwellings that adapt to the constant dynamism of human beings arose the idea of designing an evolutive building based on modular construction. Modular construction can be understood as the main answer to the problem of physical, immovable structures whose characteristics remain unchangeable over time. In this work an evolutive dwelling was conceived, based on the concepts of Modular Coordination, identifying its specifications, performance requirements, and functional characteristics. A base modular measure of 0.10 m was adopted. The modules were designed so that their practical performance would be satisfactory, and their structural base was built from the rearrangement of shipping containers. The use of green roofs was also foreseen, as they are a proven solution for markedly reducing the harmful effect that construction has on the environment, while also offering thermal and acoustic advantages. The question of encouraging the use of prefabrication in Portugal on a larger scale could not be neglected. The presentation of possible future developments is also relevant, given the importance inherent to the industrialization of construction. The study also made clear the benefits of rationalization and industrialization, both in economic and in environmental terms: it is possible to execute a project with lower levels of waste and higher quality indices at a lower price.
Abstract:
This dissertation focuses on the fatigue study of a railway bridge with a composite girder deck belonging to a freight line. The case study is the railway bridge over the Sonho river, located on the Carajás Railroad in northeastern Brazil. Some of the largest freight trains in the world run on this line, about 3.7 km long and with axle loads above 300 kN. In a first phase, several methodologies for the fatigue analysis of steel railway bridges are presented. The computational tool FADBridge, developed in MATLAB, is also described; it enables the systematic and efficient calculation of fatigue damage in structural details according to the Eurocodes. The numerical methodologies used for the dynamic analyses of the bridge-train system and the regulatory aspects to be considered in the design of railway bridges are then addressed. The finite element model of the bridge was built with the ANSYS program. Based on this model, the modal parameters were obtained, namely the natural frequencies and mode shapes, and the importance of the track-deck composite effect and the influence of the nonlinear behavior of the ballast were also analyzed. The dynamic behavior of the bridge was studied with a moving-loads methodology using the Train-Bridge Interaction (TBI) computational tool. The dynamic analyses were performed for the passage of real freight and passenger trains and for the fatigue trains prescribed by the standards. These analyses studied the influence of global and local vibration modes, train load configurations, and increased running speed on the dynamic response of the bridge. Finally, the fatigue behavior of several structural details was evaluated for the regulatory and real traffic scenarios. The influence of increased speed, train load configuration, and structural degradation on the fatigue damage values and the corresponding residual life was also analyzed.
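As a rough illustration of the Eurocode-based fatigue damage calculation that a tool like FADBridge systematizes, the Python sketch below applies Miner's rule with an EN 1993-1-9 style bilinear S-N curve. The detail category and the stress-range histogram (which in practice would come from rainflow counting of the dynamic analysis results) are invented for illustration.

```python
import numpy as np

# EN 1993-1-9 style bilinear S-N curve (illustrative detail category 71)
DSIG_C = 71.0                            # fatigue strength at 2e6 cycles [MPa]
DSIG_D = DSIG_C * (2 / 5) ** (1 / 3)     # constant-amplitude fatigue limit (5e6 cycles)
DSIG_L = DSIG_D * (5 / 100) ** (1 / 5)   # cut-off limit (1e8 cycles)

def cycles_to_failure(dsig):
    """Endurance N for a stress range: slope m=3 above the CAFL, m=5 below."""
    if dsig >= DSIG_D:
        return 5e6 * (DSIG_D / dsig) ** 3
    if dsig >= DSIG_L:
        return 5e6 * (DSIG_D / dsig) ** 5
    return np.inf                        # below the cut-off: no damage

# Hypothetical annual stress-range histogram for one structural detail
ranges = np.array([20.0, 40.0, 60.0, 90.0])   # stress ranges [MPa]
counts = np.array([5e5, 1e5, 2e4, 1e3])       # cycles per year

# Miner's rule: damage is the sum of cycle ratios n_i / N_i
damage = sum(n / cycles_to_failure(s) for s, n in zip(ranges, counts))
print(f"annual Miner damage D = {damage:.4f}, residual life ~ {1/damage:.0f} years")
```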
Abstract:
Thesis submitted to the Faculty of Sciences and Technology, New University of Lisbon, for the degree of Doctor of Philosophy in Environmental Sciences
Abstract:
We are witnessing exponential growth of health care expenditures, both in Europe and in the United States. In Portugal, total expenditure on health amounted to 10.2% of GDP in 2006, against 8.8% at the beginning of the previous decade. It is important to understand what drives this growth in overall terms, with respect to resource consumption, and in terms of public spending. This study was designed to achieve two objectives: first, to contribute to the study of the demand for health care in Portugal and, more specifically, to analyze the effect of price changes on the utilization of health care services; and second, to estimate the price elasticity of demand for different types of health care. Methodology: Observational study based on the empirical analysis of administrative data (claims) on the health care utilization of 12,230 individuals holding an individual health insurance plan with a private insurer in Portugal. The price elasticities of demand for the different types of health care were obtained as the quotient between the percentage change in the quantity of each type of health care, before and after the change in the price paid by the individual, and the corresponding percentage change in price. Results: For all medical services, price increases were associated with reductions in the quantity of care consumed, as predicted by neoclassical demand theory, and demand is elastic: the price elasticities obtained are greater than 1 in absolute value, so the price increase led to a more than proportional reduction in the quantities demanded. Demand was more responsive to changes in the price of outpatient (specialist and emergency) care than to changes in the price of inpatient care.
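A minimal sketch of the elasticity computation described in the methodology, with invented before/after figures rather than the study's claims data.

```python
def price_elasticity(q_before, q_after, p_before, p_after):
    """Ratio of the percentage change in quantity to the percentage change in price."""
    pct_q = (q_after - q_before) / q_before
    pct_p = (p_after - p_before) / p_before
    return pct_q / pct_p

# Hypothetical example: the copayment rises 20% and visits fall 30%
e = price_elasticity(q_before=1000, q_after=700, p_before=10.0, p_after=12.0)
print(f"elasticity = {e:.2f}")   # -1.50 -> |e| > 1, so demand is elastic
```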
Abstract:
Dissertation for obtaining the degree of Master in Electrical Engineering.
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located in that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], whereas the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, with the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest; the basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown by Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended to three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data into statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, which plays the role of the mixed sources, is not straightforward.
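A minimal sketch of the constrained least-squares route mentioned above: with known endmember signatures, each pixel's abundances are recovered by minimizing the reconstruction error subject to nonnegativity and full additivity. The synthetic endmembers, abundances, and noise level are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
L, p = 50, 3                        # spectral bands, endmembers
E = rng.uniform(0.1, 1.0, (L, p))   # assumed (known) endmember signatures
a_true = np.array([0.6, 0.3, 0.1])  # ground-truth abundance fractions
x = E @ a_true + 0.001 * rng.standard_normal(L)   # observed pixel spectrum

# Minimize ||x - E a||^2 subject to a >= 0 and sum(a) = 1 (full additivity)
res = minimize(
    lambda a: np.sum((x - E @ a) ** 2),
    x0=np.full(p, 1.0 / p),
    bounds=[(0, None)] * p,
    constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
    method="SLSQP",
)
print("estimated abundances:", np.round(res.x, 3))   # close to a_true
```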
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel per endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
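A minimal sketch of the SVD/PCA-style dimensionality reduction step discussed above, projecting synthetic linear mixtures onto the subspace of the leading principal components before any unmixing. The data are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
L, p, N = 50, 3, 1000                # bands, endmembers, pixels
E = rng.uniform(0.1, 1.0, (L, p))    # synthetic endmember signatures
A = rng.dirichlet(np.ones(p), N).T   # abundances on the simplex (p x N)
X = E @ A + 0.005 * rng.standard_normal((L, N))   # linear mixtures + noise

# Project onto the signal subspace via SVD of the mean-centered data
mu = X.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(X - mu, full_matrices=False)
X_red = U[:, :p].T @ (X - mu)        # p x N reduced representation

explained = (s[:p] ** 2).sum() / (s ** 2).sum()
print(f"variance captured by the first {p} components: {explained:.4f}")
```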
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix that minimizes the mutual information may be very far from the true one. Nevertheless, some abundance fractions may be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM) type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms on experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA on real data. Section 6.7 describes the new blind unmixing scheme and gives some illustrative examples. Section 6.8 concludes with some remarks.
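A minimal sketch of the Dirichlet abundance model mentioned above: samples from a mixture of Dirichlet densities are nonnegative and sum to one by construction, which is exactly the full-additivity constraint that defeats the independence assumption behind ICA/IFA. The concentration parameters and mixture weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
# Mixture of two Dirichlet components over 3 endmember abundances
alphas = [np.array([8.0, 2.0, 2.0]), np.array([2.0, 2.0, 8.0])]
weights = [0.5, 0.5]

# Draw each sample from a randomly chosen mixture component
comp = rng.choice(len(alphas), size=5, p=weights)
samples = np.vstack([rng.dirichlet(alphas[k]) for k in comp])

print(samples.round(3))      # every row is nonnegative ...
print(samples.sum(axis=1))   # ... and sums to 1 (full additivity)
```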
Abstract:
Warehouse Operations Management. Patrícia Raquel Freitas Gomes. Internship report presented to the Instituto Superior de Contabilidade e Administração do Porto for obtaining the degree of Master in Logistics, supervised by Prof. Doutora Maria Teresa Ribeiro Pereira and co-supervised by Eng.º César Emanuel Marinho Carvalho Teixeira.
Abstract:
In the present study we report the results of an analysis, based on ribotyping, of Corynebacterium diphtheriae intermedius strains isolated from a 9-year-old child with clinical diphtheria and from his 5 contacts. Quantitative analysis of RFLPs of rRNA was used to determine the relatedness of these 7 C. diphtheriae strains, providing supporting data for diphtheria epidemiology. We also tested these strains for toxigenicity, in vitro by the Elek gel diffusion method and in vivo by a cell culture method on cultured monkey kidney cells (VERO cells). The hybridization results revealed that the 5 C. diphtheriae strains isolated from contacts and the one isolated from the clinical case (nose case strain) had identical RFLP patterns, ribotype B, with all 4 restriction endonucleases used. The genetic distance between this ribotype and ribotype A (throat case strain), which we initially assumed to be responsible for the illness of the patient, was 0.450, showing poor genetic correlation between these two ribotypes. We found no significant differences with regard to toxin production using the cell culture method. In conclusion, the use of RFLPs of the rRNA gene was successful in detecting minor differences in closely related toxigenic C. diphtheriae intermedius strains and in providing information about the genetic relationships among them.
Abstract:
According to the hedonic price method, the price of a good is related to the characteristics or services it provides. Within this framework, the aim of this study is to examine the effect on room rates of different characteristics of hotels in and around the city of Porto, such as star category, size, room and service quality, hotel facilities, and location. A hedonic price function was estimated using data for 51 hotels. The results make it possible to identify the attributes that are important to consumers and hoteliers, and to what extent. This information can be used by hotel managers to define a price strategy and can be helpful in new investment decisions.
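A minimal sketch of a hedonic price regression of the kind estimated in the study, fitting log room rate on a few hotel attributes by ordinary least squares. The data and the attribute set are synthetic stand-ins, not the study's 51-hotel sample.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 51                              # same sample size as the study; data invented
stars = rng.integers(2, 6, n)       # star category (2-5)
rooms = rng.integers(20, 300, n)    # hotel size (number of rooms)
seafront = rng.integers(0, 2, n)    # location dummy

# Synthetic "true" hedonic relationship with noise
log_price = (3.0 + 0.30 * stars + 0.001 * rooms + 0.15 * seafront
             + 0.05 * rng.standard_normal(n))

# OLS: each coefficient is the implicit marginal price of an attribute
X = np.column_stack([np.ones(n), stars, rooms, seafront])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
for name, b in zip(["const", "stars", "rooms", "seafront"], beta):
    print(f"{name:>8}: {b:+.4f}")
```

With log price as the dependent variable, each coefficient reads as the approximate percentage change in the room rate per unit of the attribute, which is how such estimates inform pricing and investment decisions.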