36 results for free thyroxine index
at Instituto Politécnico do Porto, Portugal
Abstract:
In this paper, we characterize two power indices introduced in [1] using two different modifications of the monotonicity property first stated by [2]. The sets of properties are easily comparable with each other and with previous characterizations of other power indices.
Abstract:
Constrained nonlinear optimization problems can be solved using penalty or barrier functions. This strategy, based on solving unconstrained problems obtained from the original problem, has proved to be effective, particularly when used with direct search methods. An alternative for solving such problems is the filter method. The filter method, introduced by Fletcher and Leyffer in 2002, has been widely used to solve problems of the type mentioned above. These methods use a strategy different from barrier or penalty functions: the latter define a new function that combines the objective function and the constraints, while the filter method treats the optimization problem as a bi-objective problem that minimizes the objective function and a function that aggregates the constraints. Motivated by the work of Audet and Dennis in 2004, which used the filter method with derivative-free algorithms, the authors developed works in which other direct search methods were used, combining their potential with the filter method. More recently, a new variant of these methods was presented, in which some alternative aggregations of the constraints for the construction of the filter were proposed. This paper presents a variant of the filter method, more robust than the previous ones, implemented with a safeguard procedure in which the values of the objective function and of the constraints are interlinked and not treated completely independently.
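For readers unfamiliar with the construction, a common way of writing the aggregated constraint violation and the resulting bi-objective problem is sketched below; this is a standard formulation, not necessarily the exact aggregation variant proposed in the paper.

```latex
% Aggregated violation of the constraints g_i(x) <= 0 and the bi-objective
% problem handled by the filter (standard formulation, assumed here):
h(x) = \sum_{i=1}^{m} \max\{0,\; g_i(x)\},
\qquad
\min_{x}\; \bigl(f(x),\; h(x)\bigr).
```

Under this view, a trial point is rejected by the filter when some previously stored pair (f_k, h_k) satisfies both f_k <= f(x) and h_k <= h(x), i.e. when it is dominated by an existing filter entry.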
Abstract:
In real optimization problems, the analytical expression of the objective function is usually not known, nor are its derivatives, or they are too complex to handle. In these cases it becomes essential to use optimization methods in which the calculation of derivatives, or the verification of their existence, is not necessary: direct search methods, or derivative-free methods, are one solution. When the problem has constraints, penalty functions are often used. Unfortunately, the choice of the penalty parameters is frequently very difficult, because most strategies for choosing them are heuristic. Filter methods appeared as an alternative to penalty functions. A filter algorithm introduces a function that aggregates the constraint violations and constructs a bi-objective problem. In this problem, a step is accepted if it reduces either the objective function or the constraint violation. This makes filter methods less parameter dependent than penalty functions. In this work, we present a new direct search method, based on simplex methods, for general constrained optimization, which combines the features of the simplex method and filter methods. This method does not compute or approximate any derivatives, penalty constants or Lagrange multipliers. The basic idea of the simplex filter algorithm is to construct an initial simplex and use it to drive the search. We illustrate the behavior of our algorithm through some examples. The proposed methods were implemented in Java.
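The filter acceptance rule described above can be sketched in a few lines. The Python fragment below uses hypothetical helper names and is only an illustration of the dominance test; it is not the authors' Java implementation.

```python
def constraint_violation(g_values):
    """Aggregate the violations of constraints g_i(x) <= 0 into one measure."""
    return sum(max(0.0, g) for g in g_values)

def is_acceptable(candidate, filter_entries):
    """A trial (f, h) pair is acceptable unless some stored entry dominates it."""
    f_new, h_new = candidate
    return not any(f_old <= f_new and h_old <= h_new
                   for f_old, h_old in filter_entries)

def update_filter(candidate, filter_entries):
    """Add an accepted point and drop the entries it dominates."""
    f_new, h_new = candidate
    kept = [(f, h) for f, h in filter_entries
            if not (f_new <= f and h_new <= h)]
    kept.append(candidate)
    return kept

# Usage: a trial simplex vertex is kept only if the filter accepts it.
filter_entries = [(10.0, 0.0), (12.0, 0.5)]
trial = (9.5, 0.2)                       # (objective value, violation)
if is_acceptable(trial, filter_entries):
    filter_entries = update_filter(trial, filter_entries)
# The entry (12.0, 0.5) is dominated by the trial point and is removed.
```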
Abstract:
The filter method is a technique for solving nonlinear programming problems. The filter algorithm has two phases in each iteration. The first one reduces a measure of infeasibility, while in the second the objective function value is reduced. In real optimization problems, usually the objective function is not differentiable or its derivatives are unknown. In these cases it becomes essential to use optimization methods where the calculation of the derivatives or the verification of their existence is not necessary: direct search methods or derivative-free methods are examples of such techniques. In this work we present a new direct search method, based on simplex methods, for general constrained optimization that combines the features of simplex and filter methods. This method neither computes nor approximates derivatives, penalty constants or Lagrange multipliers.
Abstract:
The stock market, globally, has in recent times proved to be one of the main sources of stimulus for the securities market. Its impact on the general public is enormous and its importance for companies is vital. It is therefore important to understand how financial theory has approached the valuation and the understanding of the process through which a share price is formed. From the 1950s to the present day, it is worth examining how different authors have dealt with this question and what the results of that confrontation have been. Above all, it is important to understand Stephen Ross's approach and the arbitrage theory. Following this approach, and with the emergence of the Multi Index Model, it became possible to estimate the evolution of the share price with greater precision, since the price would depend on a broad set of variables covering a wide area of influence. Ross's contribution is therefore decisive. In the end, what matters is to retain the best technique and theory, the one that defends the interests of the investor. Given this, it remains to determine which statistical technique is best suited to carrying out these empirical studies.
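For reference, a generic multi-index return model of the kind referred to above can be written as follows; this is standard textbook notation, assumed here rather than taken from the text.

```latex
% Generic multi-index return model (standard notation; the specific indices
% used in any empirical study must be chosen by the analyst):
R_i = a_i + \sum_{j=1}^{k} b_{ij}\, I_j + e_i ,
\qquad \mathbb{E}[e_i] = 0,\quad \operatorname{Cov}(I_j, e_i) = 0,
```

where R_i is the return on asset i, I_j the value of index (factor) j, b_{ij} the sensitivity of the asset to that index, and e_i the idiosyncratic residual.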
Abstract:
With the economic development of recent decades, the management of energy resources is a challenge that modern society faces. There is therefore currently a need to search for new energy sources, namely renewable ones. Since biodiesel is a renewable energy source, its growing production will bring an increase in the production of residues, such as glycerol and fatty acids. It is therefore important to reduce/valorise these residues in order to prevent their accumulation over time. The valorisation of these residues is the main objective of this work. The first part consisted of the esterification of free fatty acids with glycerol, in the presence of an acid catalyst, to produce monoglycerides. Different types of raw materials were used: glycerol (76.3%) and a fatty acid residue (20.8%), supplied by the company SOCIPOLE SA, pure glycerol (92.2%) and pure oleic acid (93.1%). The catalysts used were commercial zinc chloride and commercial p-toluenesulfonic acid. No specific analyses of the monoglycerides were carried out; the product was characterised by its acid value. Apparently, the highest conversion of fatty acids was obtained in the esterification run of fatty acids with glycerol, both from SOCIPOLE SA. However, this run could not be used as a basis of comparison with the others due to the formation of a solid phase (polymer). Regarding the other runs, with a glycerol/fatty acid molar ratio of 1:3, the best result was obtained in the reaction of SOCIPOLE SA glycerol with pure oleic acid, in the presence of the p-toluenesulfonic acid catalyst, at a temperature of 106.3 ºC and a reaction time of 4 h 30 min, with a final oleic acid conversion of 80.7%. In the second part, the esterification of free fatty acids with methanol, in the presence of sulfuric acid, was studied for the production of biodiesel, using fatty acids supplied by SOCIPOLE SA, fatty acids derived from the soaps of a glycerol residue supplied by the Chemical Technology Laboratory (Professor Lídia Vasconcelos, ISEP), and fatty acids derived from the soaps of crude glycerol supplied by SOCIPOLE SA. The runs were carried out at 65 ºC, with stirring at 120 rpm and a fatty acid/methanol molar ratio of 1:3. It was observed that the acid value of the product, after washing and drying, decreased with reaction time and that, in general, the percentage of esters increased, the reaction becoming very slow after six hours. The study of the fatty acid/methanol ratio did not allow conclusions to be drawn. The best result obtained corresponded to a product with 96.2% methyl esters and an acid value of 8.54 mg KOH/g sample, so it cannot yet be designated as biodiesel.
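As context for the conversion figures quoted above, the free fatty acid conversion is commonly estimated from the acid value (AV) of the mixture. The formula below uses standard notation, and the numerical example is purely illustrative, not taken from the thesis.

```latex
% Conversion estimated from the acid value; AV_0 is the initial acid value
% and AV_t the acid value after reaction time t (both in mg KOH/g):
X_{\mathrm{FFA}} = \frac{AV_0 - AV_t}{AV_0} \times 100\%
```

Purely as an illustration, an initial acid value of 180 mg KOH/g falling to about 34.7 mg KOH/g would correspond to a conversion of roughly 80.7%, the order of magnitude reported above.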
Abstract:
Geomechanical classifications are one of the most widely recognised approaches for estimating rock mass quality, given their simplicity and their ability to handle uncertainty. Geological and geotechnical uncertainties can be effectively evaluated using appropriate classifications. This study aims to emphasise the importance of geomechanical classifications and geomechanical indices, such as the Rock Mass Rating (RMR), the Rock Tunnelling Quality Index (Q-system), the Geological Strength Index (GSI) and the Hydro-Potential (HP) Value, for assessing the quality of the granitic rock mass of the underground galleries of Paranhos (Carvalhido-Burgães sector; urban area of Porto). In particular, the Hydro-Potential value (HP-value) is a semi-quantitative classification applied to rock masses that allows groundwater inflows into excavations in rocky ground to be estimated. For this evaluation, an extensive geological-geotechnical and geomechanical database was compiled and integrated, supported by the scanline sampling technique applied to exposed discontinuity surfaces. To refine the geotechnical zoning of the granitic rock mass, previously carried out in 2010, rock samples were collected at key points in order to evaluate their strength through the Point Load Test (PLT). The geomechanical classifications were applied in a balanced manner, establishing different scenarios and always taking into account knowledge of the in situ characteristics of the rock mass. A hydrogeomechanical zoning proposal is presented with the aim of better understanding the geo-hydraulic circulation of the granitic rock mass. This methodology is intended to contribute to deepening knowledge of the Porto bedrock, particularly with regard to its geomechanical behaviour.
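For context, the Rock Tunnelling Quality Index (Q-system) cited above is conventionally computed from six parameters; the standard Barton definition is reproduced below as background, and is assumed rather than quoted from this study.

```latex
% Standard Q-system definition (Barton et al.):
Q = \frac{RQD}{J_n} \cdot \frac{J_r}{J_a} \cdot \frac{J_w}{SRF}
```

Here RQD is the rock quality designation, J_n the joint set number, J_r the joint roughness number, J_a the joint alteration number, J_w the joint water reduction factor, and SRF the stress reduction factor.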
Abstract:
Two chromatographic methods, gas chromatography with flame ionization detection (GC–FID) and liquid chromatography with ultraviolet detection (LC–UV), were used to determine furfuryl alcohol in several kinds of foundry resins, after application of an optimised extraction procedure. The GC method developed proved applicable regardless of the resin type. Analysis by LC was suitable only for furanic resins; the presence of interferences in the phenolic resins did not allow an appropriate quantification by LC. Both methods gave accurate and precise results. Recoveries were >94%; relative standard deviations were ≤7% and ≤0.3%, respectively, for the GC and LC methods. Good relative deviations between the two methods were found (≤3%).
Abstract:
Formaldehyde is a toxic component present in foundry resins. Its quantification is important for the characterisation of the resin (kind and degradation) as well as for the evaluation of free contaminants present in wastes generated by the foundry industry. The complexity of the matrices considered suggests the need for separative techniques. The method developed for the identification and quantification of formaldehyde in foundry resins is based on the determination of free carbonyl compounds by derivatization with 2,4-dinitrophenylhydrazine (DNPH), adapted to the matrices considered and using liquid chromatography (LC) with UV detection. Formaldehyde determinations in several foundry resins gave precise results. Mean recovery and R.S.D. were, respectively, >95% and 5%. Analyses by the hydroxylamine reference method gave comparable results. The results showed that the hydroxylamine reference method is applicable only to a specific kind of resin, while the developed method performs well for all the studied resins.
Abstract:
Phenol is a toxic compound present in a wide variety of foundry resins. Its quantification is important for the characterization of the resins as well as for the evaluation of free contaminants present in foundry wastes. Two chromatographic methods, liquid chromatography with ultraviolet detection (LC-UV) and gas chromatography with flame ionization detection (GC-FID), were developed for the analysis of free phenol in several foundry resins, after a simple extraction procedure (30 min). Both chromatographic methods were suitable for the determination of phenol in the studied furanic and phenolic resins, showing good selectivity, accuracy (recovery 99–100%; relative deviations <5%), and precision (coefficients of variation <6%). The ASTM reference method used was found to be useful only for the analysis of phenolic resins, while the LC and GC methods were applicable to all the studied resins. The developed methods reduce the time of analysis from 3.5 hours to about 30 min and can readily be used in routine quality control laboratories.
Abstract:
Celiac disease (CD) is an autoimmune enteropathy characterized by an inappropriate T-cell-mediated immune response to the ingestion of certain dietary cereal proteins in genetically susceptible individuals. This disorder has environmental, genetic, and immunological components. CD has a prevalence of up to 1% in populations of European ancestry, yet a high percentage of cases remain underdiagnosed. Diagnosis and treatment should occur early, since untreated disease causes growth retardation and atypical symptoms, such as infertility or neurological disorders. The diagnostic criteria for CD, which require endoscopy with small bowel biopsy, have been changing over the last few decades, especially due to the advent of serological tests with higher sensitivity and specificity. Serological markers can be very useful to rule out clinically suspicious cases and also to help monitor patients after adherence to a gluten-free diet. Since the current treatment consists of a life-long gluten-free diet, which leads to significant clinical and histological improvement, the standardization of an assay to assess gluten unequivocally in gluten-free foodstuffs is of major importance.
Abstract:
This communication presents a novel kind of silicon nanomaterial: freestanding Si nanowire arrays (Si NWAs), which are synthesized facilely by one-step template-free electro-deoxidation of SiO2 in molten CaCl2. The self-assembling growth process of this material is also investigated preliminarily.
Abstract:
The TEM family of enzymes has had a crucial impact on the pharmaceutical industry due to its important role in antibiotic resistance. Even with the latest technologies in structural biology and genomics, no 3D structure of a TEM-1/antibiotic complex prior to acylation is known. Therefore, the understanding of their capability to acylate antibiotics is based on the uncomplexed protein macromolecular structure. In this work, molecular docking, molecular dynamics simulations, and relative free energy calculations were applied in order to obtain a comprehensive and thorough analysis of the TEM-1/ampicillin and TEM-1/amoxicillin complexes. We describe the complexes and analyze the effect of ligand binding on the overall structure. We clearly demonstrate that the key residues involved in the stability of the ligand (hot spots) vary with the nature of the ligand. Structural effects such as (i) the distances between interfacial residues (Ser70−Oγ and Lys73−Nζ, Lys73−Nζ and Ser130−Oγ, and Ser70−Oγ and Ser130−Oγ), (ii) side chain rotamer variation (Tyr105 and Glu240), and (iii) the presence of conserved waters can also be influenced by ligand binding. This study supports the hypothesis that TEM-1 undergoes structural modifications upon ligand binding.
Abstract:
Power law (PL) behavior and fractional calculus are two faces of phenomena with long-memory dynamics. This paper applies the PL description to analyze different periods of the business cycle. With this purpose, the evolution over time of ten important stock market indices (DAX, Dow Jones, NASDAQ, Nikkei, NYSE, S&P500, SSEC, HSI, TWII, and BSE) is studied. An evolutionary algorithm is used to fit the PL parameters. It is observed that PL curve fitting constitutes a good tool for revealing the main characteristics of the signals and the emergence of the global financial dynamic evolution.
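The two-parameter power law model mentioned above can be fitted in several ways. The sketch below uses an ordinary least-squares fit in log-log space on synthetic data, purely to illustrate the parameterization; the paper itself relies on an evolutionary algorithm, and no market data are reproduced here.

```python
import numpy as np

def fit_power_law(t, y):
    """Fit y ≈ c * t**m by least squares in log-log space.

    This ordinary least-squares fit only illustrates the two-parameter PL
    model; it is not the evolutionary fitting procedure used in the paper.
    """
    log_t, log_y = np.log(t), np.log(y)
    m, log_c = np.polyfit(log_t, log_y, 1)   # slope and intercept
    return np.exp(log_c), m

# Synthetic example (not market data): y = 3 * t**0.5 with small noise.
rng = np.random.default_rng(0)
t = np.arange(1, 200, dtype=float)
y = 3.0 * t**0.5 * np.exp(rng.normal(0.0, 0.02, t.size))
c, m = fit_power_law(t, y)
print(f"c ≈ {c:.2f}, m ≈ {m:.2f}")   # expected: close to 3 and 0.5
```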
Abstract:
It is widely assumed that scheduling real-time tasks becomes more difficult as their deadlines get shorter. With shorter deadlines, however, tasks potentially compete less with each other for processors, and this can produce more contention-free slots, at which the number of competing tasks is smaller than or equal to the number of available processors. This paper presents a policy (called the CF policy) that utilizes such contention-free slots effectively. This policy can be employed by any work-conserving, preemptive scheduling algorithm, and we show that any algorithm extended with this policy dominates the original algorithm in terms of schedulability. We also present improved schedulability tests for algorithms that employ this policy, based on the observation that interference from tasks is reduced when their executions are postponed to contention-free slots. Finally, using the properties of the CF policy, we derive a counter-intuitive claim that shortening task deadlines can help improve the schedulability of task systems. We present heuristics that effectively reduce task deadlines for better schedulability without performing any exhaustive search.
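The notion of a contention-free slot used above can be illustrated with a toy discrete-time check: a slot is contention-free when the number of competing jobs does not exceed the number of processors. The helper below uses hypothetical names and a simplified job model, and is not the scheduling algorithm or analysis from the paper.

```python
def contention_free_slots(jobs, num_processors, horizon):
    """Return the time slots where at most `num_processors` jobs compete.

    `jobs` is a list of (release, deadline) pairs in integer time units; a job
    is counted as competing in every slot of its release-to-deadline window.
    This is only a toy illustration of the contention-free slot concept.
    """
    free_slots = []
    for t in range(horizon):
        competing = sum(1 for release, deadline in jobs
                        if release <= t < deadline)
        if competing <= num_processors:
            free_slots.append(t)
    return free_slots

# Example: three jobs on two processors over ten slots.
jobs = [(0, 4), (1, 6), (2, 5)]
print(contention_free_slots(jobs, num_processors=2, horizon=10))
# Slots 2 and 3 have three competing jobs; all other slots are contention-free.
```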