994 results for range-domain pairs
Abstract:
There is no single definition of a long-memory process. Such a process is generally defined as a series whose correlogram decays slowly or whose spectrum diverges at zero frequency. It is also said that a series with this property is characterized by long-range dependence and by non-periodic long cycles, that this feature describes the correlation structure of a series at long lags, or that it is conventionally expressed in terms of a power-law decay of the autocovariance function. The growing interest of international research in this topic is justified by the search for a better understanding of the dynamic nature of financial asset price series. First, the lack of consistency among existing results calls for new studies and the use of several complementary methodologies. Second, the confirmation of long-memory processes has relevant implications for (1) theoretical and econometric modelling (i.e., martingale price models and technical trading rules), (2) statistical tests of equilibrium and asset pricing models, (3) optimal consumption/saving and portfolio decisions, and (4) the measurement of efficiency and rationality. Third, empirical scientific questions remain about identifying the general theoretical market model best suited to modelling the diffusion of these series. Fourth, regulators and risk managers need to know whether there are persistent, and therefore inefficient, markets that may consequently produce abnormal returns. The research objective of this dissertation is twofold. On the one hand, it aims to add to the long-memory debate by examining the behaviour of the daily return series of the main EURONEXT stock indices. On the other hand, it aims to contribute to the improvement of the capital asset pricing model (CAPM) by considering an alternative risk measure capable of overcoming the constraints of the efficient market hypothesis (EMH) in the presence of financial series whose increments are not independent and identically distributed (i.i.d.). The empirical study indicates that long-maturity Treasury bonds (OTs) can alternatively be used to compute market returns, since their behaviour in sovereign debt markets reflects investors' confidence in the financial condition of states and measures how investors assess the respective economies based on the overall performance of their assets. Although the price diffusion model defined by geometric Brownian motion (gBm) is claimed to provide a good fit to financial time series, its assumptions of normality, stationarity and independence of the residual innovations are contradicted by the empirical data analysed. Therefore, in the search for evidence of long memory in these markets, rescaled-range analysis (R/S) and detrended fluctuation analysis (DFA) are employed, under the fractional Brownian motion (fBm) framework, to estimate the Hurst exponent H for the complete data series and to compute the "local" Hurst exponent H_t over sliding windows. In addition, statistical hypothesis tests are performed using the rescaled-range (R/S) test, the modified rescaled-range (M-R/S) test and the GPH fractional differencing test.
In terms of a single conclusion from all methods about the nature of dependence in the stock market as a whole, the empirical results are inconclusive. This means that the degree of long memory, and thus any classification, depends on each particular market. Nevertheless, the mostly positive overall results support the presence of long memory, in the form of persistence, in the stock returns of Belgium, the Netherlands and Portugal. This suggests that these markets are subject to greater predictability (the "Joseph effect"), but also to trends that can be unexpectedly interrupted by discontinuities (the "Noah effect"), and therefore tend to be riskier to trade. Although the evidence of fractal dynamics has weak statistical support, in line with most international studies, it refutes the random-walk hypothesis with i.i.d. increments, which underlies the weak form of the EMH. Accordingly, contributions to improving the CAPM are proposed, through a new fractal capital market line (FCML) and a new fractal security market line (FSML). The proposal suggests that the risk element (for the market and for an asset) be given by the Hurst exponent H for long lags of stock returns. The exponent H measures the degree of long memory in stock indices, both when the return series follow an uncorrelated i.i.d. process described by gBm (where H = 0.5, confirming the EMH and validating the CAPM) and when they follow a process with statistical dependence described by fBm (where H differs from 0.5, rejecting the EMH and invalidating the CAPM). The advantage of the FCML and the FSML is that the long-memory measure defined by H is an appropriate reference for expressing risk in models that can be applied to data series following both i.i.d. processes and processes with nonlinear dependence. These formulations therefore contain the EMH as a possible particular case.
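As an illustration of the rescaled-range methodology mentioned above, the sketch below estimates the Hurst exponent H by regressing log(R/S) on log(n) over dyadic window sizes. It is a minimal sketch, assuming non-overlapping equal-size blocks and simulated i.i.d. returns; it is not the dissertation's exact estimation procedure.

```python
import numpy as np

def rs_hurst(returns, min_window=16):
    """Estimate the Hurst exponent via classical rescaled-range (R/S) analysis.

    For each window size n, the series is split into blocks; for each block
    the range of the cumulative mean-adjusted sum is divided by the block's
    standard deviation. H is the slope of log(R/S) against log(n).
    """
    returns = np.asarray(returns, dtype=float)
    N = len(returns)
    sizes, rs_values = [], []
    n = min_window
    while n <= N // 2:
        rs_per_block = []
        for start in range(0, N - n + 1, n):
            block = returns[start:start + n]
            z = np.cumsum(block - block.mean())
            r = z.max() - z.min()        # range of cumulative deviations
            s = block.std(ddof=1)        # block standard deviation
            if s > 0:
                rs_per_block.append(r / s)
        if rs_per_block:
            sizes.append(n)
            rs_values.append(np.mean(rs_per_block))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope  # H near 0.5: i.i.d. (gBm); H > 0.5: persistence (fBm)

# Illustrative use with simulated i.i.d. returns (H should be near 0.5).
rng = np.random.default_rng(0)
print(rs_hurst(rng.standard_normal(4096)))
```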
Abstract:
Consider a wireless sensor network (WSN) where a broadcast from a sensor node does not reach all sensor nodes in the network; such networks are often called multihop networks. Sensor nodes take individual sensor readings; in many cases, however, it is relevant to compute aggregated quantities of these readings. In fact, the minimum and maximum of all sensor readings at an instant are often interesting because they indicate abnormal behavior; for example, a very high maximum temperature may indicate that a fire has broken out. In this context, we propose an algorithm for computing the min or max of sensor readings in a multihop network. This algorithm has the particularly interesting property of having a time complexity that does not depend on the number of sensor nodes; only the network diameter and the range of the value domain of sensor readings matter.
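The paper's own algorithm is not reproduced here, but the following minimal sketch illustrates why such a complexity bound is plausible: a binary search over the value domain, where each probe ("does any node hold a reading ≤ m?") would, in a real network, be answered by a flood/echo wave bounded by the network diameter, so the total time is proportional to diameter × log|domain|, independent of the number of nodes. The function and the in-memory list of readings are illustrative stand-ins.

```python
# Hedged sketch (not the paper's algorithm): binary search over the value
# domain to find the network-wide minimum.

def network_min(readings, domain_lo, domain_hi):
    lo, hi = domain_lo, domain_hi
    while lo < hi:
        mid = (lo + hi) // 2
        # Stand-in for a diameter-bounded flood/echo probe:
        # does any node hold a reading <= mid?
        if any(r <= mid for r in readings):
            hi = mid
        else:
            lo = mid + 1
    return lo

print(network_min([37, 21, 58, 44], 0, 255))  # -> 21
```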
Abstract:
WiDom is a previously proposed prioritized medium access control protocol for wireless channels. We present a modification to this protocol in order to improve its reliability. This modification has similarities with cooperative relaying schemes, but, in our protocol, all nodes can relay a carrier wave. The preliminary evaluation shows that, under transmission errors, a significant reduction in the number of failed tournaments can be achieved.
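WiDom resolves contention much like CAN's binary-countdown arbitration. The sketch below is a minimal, centralized model of such a dominance tournament, assuming that 0 is the dominant bit and that the lowest priority number wins; it abstracts away the carrier-wave relaying that the proposed modification is actually about.

```python
# Hedged sketch of a binary-countdown (dominance) tournament of the kind
# WiDom performs: priorities are resolved bit by bit, most significant
# bit first. A dominant bit (0) is modelled as a transmitted carrier;
# nodes that stay silent (recessive bit, 1) but hear a carrier withdraw.

def tournament(priorities, nbits=8):
    contenders = list(priorities)
    for bit in reversed(range(nbits)):
        dominant_heard = any((p >> bit) & 1 == 0 for p in contenders)
        if dominant_heard:
            contenders = [p for p in contenders if (p >> bit) & 1 == 0]
    return contenders[0]  # lowest priority number wins

print(tournament([13, 7, 42]))  # -> 7
```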
Abstract:
Dalton Trans., 2003, 3328-3338
Abstract:
Consider a wireless sensor network (WSN) where a broadcast from a sensor node does not reach all sensor nodes in the network; such networks are often called multihop networks. Sensor nodes take sensor readings, but individual readings are not, in themselves, very important; it is important, however, to compute aggregated quantities of these readings. The minimum and maximum of all sensor readings at an instant are often interesting because they indicate abnormal behavior; for example, a very high maximum temperature may indicate that a fire has broken out. We propose an algorithm for computing the min or max of sensor readings in a multihop network. This algorithm has the particularly interesting property of having a time complexity that does not depend on the number of sensor nodes; only the network diameter and the range of the value domain of sensor readings matter.
Abstract:
Master's Dissertation in Informatics Engineering
Abstract:
The process of selecting resource systems plays an important part in the integration of Distributed/Agile/Virtual Enterprises (D/A/V Es). However, as this paper points out, resource-system selection remains a difficult problem to solve in a D/A/VE. Globally, the selection problem has been approached from different angles, giving rise to different kinds of models and algorithms to solve it. To support the development of an intelligent and flexible web prototype tool (broker tool) that integrates all the selection-model activities and tools, and that can adapt to each D/A/VE project or instance (the major goal of our overall project), this paper presents a formulation of one kind of resource-selection problem and the limitations of the algorithms proposed to solve it. We formulate a particular case of the problem as an integer program, solve it using simplex and branch-and-bound algorithms, and identify their performance limitations (in terms of processing time) based on simulation results. These limitations depend on the number of processing tasks and on the number of pre-selected resources per processing task, defining the domain of applicability of the algorithms for the problem studied. The limitations detected point to the need for other kinds of algorithms (approximate-solution algorithms) outside the domain of applicability found for the algorithms simulated. For a broker tool, however, knowledge of the algorithms' limitations is very important, so that, based on problem features, the most suitable algorithm guaranteeing good performance can be developed and selected.
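To make the combinatorial growth discussed above concrete, here is a minimal sketch of a hypothetical, simplified resource-selection instance: one resource must be chosen per processing task, minimising processing cost plus a transport cost between the resources of consecutive tasks. The cost matrix and transport function are invented for illustration, and exhaustive enumeration stands in for the exact methods whose processing-time limits the paper measures.

```python
from itertools import product

proc_cost = [            # proc_cost[task][resource] -- illustrative numbers
    [4, 6, 3],
    [5, 2, 7],
    [6, 4, 5],
]
transport = lambda r1, r2: abs(r1 - r2)  # stand-in for inter-resource distance

def best_assignment(proc_cost):
    n_tasks = len(proc_cost)
    n_res = len(proc_cost[0])
    best, best_cost = None, float("inf")
    # n_res ** n_tasks candidates: processing time explodes with both factors,
    # mirroring the limitations reported for the exact algorithms.
    for choice in product(range(n_res), repeat=n_tasks):
        cost = sum(proc_cost[t][choice[t]] for t in range(n_tasks))
        cost += sum(transport(choice[t], choice[t + 1]) for t in range(n_tasks - 1))
        if cost < best_cost:
            best, best_cost = choice, cost
    return best, best_cost

print(best_assignment(proc_cost))  # -> ((2, 1, 1), 10) for these numbers
```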
Abstract:
Let X be a finite or infinite chain and let O(X) be the monoid of all endomorphisms of X. In this paper, we describe the largest regular subsemigroup of O(X) and Green's relations on O(X). In fact, more generally, if Y is a nonempty subset of X and O(X, Y) is the subsemigroup of O(X) of all elements with range contained in Y, we characterize the largest regular subsemigroup of O(X, Y) and Green's relations on O(X, Y). Moreover, for finite chains, we determine when two semigroups of the type O(X, Y) are isomorphic and calculate their ranks.
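As a small computational companion to the abstract above (writing O(X) for the endomorphism monoid, as there), the sketch below enumerates the order-preserving self-maps of the 3-element chain, restricts them to those with range inside Y = {0, 1}, and counts regular elements by brute force. It is purely illustrative; the paper's characterisation is structural, not computational.

```python
from itertools import product

X = range(3)
# Endomorphisms of the chain: order-preserving self-maps, stored as tuples.
maps = [f for f in product(X, repeat=3)
        if all(f[i] <= f[j] for i in X for j in X if i <= j)]

def compose(f, g):             # (f o g)(i) = f(g(i))
    return tuple(f[g[i]] for i in X)

def regular_elements(semigroup):
    s = set(semigroup)
    # a is regular when a = a o x o a for some x in the same semigroup.
    return [a for a in s if any(compose(compose(a, x), a) == a for x in s)]

Y = {0, 1}
with_range_in_Y = [f for f in maps if set(f) <= Y]
# O(X) itself is regular for a finite chain; the restricted semigroup need not be.
print(len(maps), len(regular_elements(maps)), len(regular_elements(with_range_in_Y)))
```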
Abstract:
A study of the chemical transformations of cork during heat treatment was made using colour variation and FTIR analysis. The cork-enriched fractions from Quercus cerris bark were subjected to isothermal heating in the temperature range 150–400 °C, with treatment times from 5 to 90 min. Mass loss ranged from 3% (90 min at 150 °C) to 71% (60 min at 350 °C). FTIR showed that the hemicelluloses were thermally degraded first, while suberin remained the most heat-resistant component. The change in CIE-Lab parameters was rapid for low-intensity treatments where no significant mass loss occurred (at 150 °C, L* decreased from the initial 51.5 to 37.3 after 20 min). The decrease in all colour parameters continued with temperature until they remained substantially constant beyond 40% mass loss. The thermally induced mass loss could be modelled using colour analysis, which is applicable to monitoring the production of heat-expanded insulation agglomerates.
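The closing remark about modelling mass loss from colour suggests a simple regression; the sketch below fits mass loss to the lightness L* with a quadratic least-squares model. The data points are hypothetical placeholders (only the initial L* = 51.5 comes from the abstract), so the fit illustrates the approach, not the study's actual model.

```python
import numpy as np

# Hypothetical (L*, mass loss %) pairs -- NOT measurements from the study.
L_star    = np.array([51.5, 45.0, 37.3, 30.0, 25.0])
mass_loss = np.array([0.0, 1.5, 3.0, 25.0, 55.0])

coeffs = np.polyfit(L_star, mass_loss, deg=2)   # quadratic least-squares fit
predict = np.poly1d(coeffs)
print(predict(40.0))  # predicted mass loss at L* = 40 (illustrative only)
```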
Abstract:
Electronics Letters, Vol. 38, No. 19
Abstract:
The effect of monopolar and bipolar shaped pulses on the additional yield of apple juice extraction is evaluated. The applied electric field strength, pulse width, and number of pulses are assessed for both pulse types, and divergences are analyzed. The electric field strength ranged from 100 to 1300 V/cm, the pulse width from 20 to 300 μs, and the number of pulses from 10 to 200, at a frequency of 200 Hz. Two pulse trains separated by 1 s were applied to apple cubes. Results are plotted against reference untreated samples for all assays. The specific energy consumption is calculated for each experiment, as well as qualitative indicators for apple juice: total soluble dry matter and absorbance at a 390-nm wavelength. Bipolar pulses demonstrated higher efficiency, and the specific energy consumption has a threshold beyond which higher energy inputs do not result in higher juice extraction when the electric field is varied. Total soluble dry matter and absorbance results do not show significant differences between monopolar and bipolar pulses, but all values are within the limits proposed for apple juice intended for human consumption.
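For the specific energy consumption mentioned above, a conventional estimate multiplies voltage, current, pulse width and pulse count, then divides by the sample mass. The sketch below applies that textbook formula with invented numbers; it is not the paper's calculation.

```python
# Hedged sketch of a conventional specific-energy estimate for a pulsed
# electric field (PEF) treatment. All numbers are illustrative assumptions.

def specific_energy_kj_per_kg(voltage_v, current_a, pulse_width_s, n_pulses, mass_kg):
    energy_j = voltage_v * current_a * pulse_width_s * n_pulses  # J delivered
    return energy_j / mass_kg / 1000.0                           # J -> kJ per kg

# e.g. 1300 V/cm over a hypothetical 2 cm gap -> 2600 V, 5 A,
# 100 us pulses, 200 pulses, 50 g sample:
print(specific_energy_kj_per_kg(2600, 5.0, 100e-6, 200, 0.050))  # ~5.2 kJ/kg
```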
Abstract:
In this paper we propose the use of least-squares based methods for obtaining digital rational approximations (IIR filters) to fractional-order integrators and differentiators of type s^α, α ∈ R. Adoption of the Padé, Prony and Shanks techniques is suggested. These techniques are usually applied in the signal modelling of deterministic signals. They yield suboptimal solutions and require only the solution of a set of linear equations. The results reveal that the least-squares approach gives similar or superior approximations compared with other widely used methods. Their effectiveness is illustrated, both in the time and frequency domains, as well as in the fractional differintegration of some standard time-domain functions.
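As a minimal sketch of one of the suggested techniques, the code below applies a Prony-style least-squares fit to the Grünwald-Letnikov impulse response of the discrete fractional differentiator (1 - z⁻¹)^α. The orders, α and sample count are illustrative assumptions; the Padé and Shanks variants are not shown.

```python
import numpy as np

def gl_impulse_response(alpha, n_samples):
    """Grunwald-Letnikov coefficients: binomial expansion of (1 - z^-1)^alpha."""
    h = np.empty(n_samples)
    h[0] = 1.0
    for k in range(1, n_samples):
        h[k] = h[k - 1] * (1.0 - (alpha + 1.0) / k)
    return h

def prony(h, p, q):
    """Fit B(z)/A(z) (degrees q / p) to impulse response h by linear least squares."""
    # Denominator: for n > q the response is homogeneous, h[n] = -sum a[m] h[n-m].
    rows = [[h[n - m] for m in range(1, p + 1)] for n in range(q + 1, len(h))]
    rhs = [-h[n] for n in range(q + 1, len(h))]
    a_tail, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    a = np.concatenate(([1.0], a_tail))
    # Numerator: b[n] = sum_{m <= min(n,p)} a[m] h[n-m] for n = 0..q.
    b = np.array([sum(a[m] * h[n - m] for m in range(0, min(n, p) + 1))
                  for n in range(q + 1)])
    return b, a

h = gl_impulse_response(alpha=0.5, n_samples=200)
b, a = prony(h, p=4, q=4)
print(b, a)  # 4th-order rational approximation to the discrete s^0.5
```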
Abstract:
15th IEEE International Conference on Electronics, Circuits and Systems, Malta
Abstract:
In this brief, a read-only-memoryless structure for binary-to-residue number system (RNS) conversion modulo {2^n ± k} is proposed. This structure is based only on adders and constant multipliers. The brief is motivated by the existing {2^n ± k} binary-to-RNS converters, which are particularly inefficient for larger values of n. The experimental results obtained for 4n and 8n bits of dynamic range suggest that the proposed conversion structures are able to significantly improve the forward conversion efficiency, with an AT-metric improvement above 100% relative to the related state of the art. Delay improvements of 2.17 times with only a 5% area increase can be achieved if a proper selection of the {2^n ± k} moduli is performed.
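The identity that makes such converters possible is 2^n ≡ ∓k (mod 2^n ± k), so each n-bit chunk of the input can be folded in with one constant multiplication and an addition. The sketch below checks this Horner-style reduction in software, with illustrative parameters; it models the arithmetic, not the proposed hardware structure.

```python
def binary_to_rns(x, n, k, sign=+1):
    """Reduce x modulo (2^n + k) when sign=+1, or (2^n - k) when sign=-1,
    using n-bit chunks, one constant multiplier and adders (Horner form)."""
    modulus = (1 << n) + sign * k
    w = (-sign * k) % modulus            # 2^n is congruent to w (mod modulus)
    mask = (1 << n) - 1
    chunks = []
    while x:
        chunks.append(x & mask)          # split the input into n-bit chunks
        x >>= n
    acc = 0
    for chunk in reversed(chunks):       # most significant chunk first
        acc = (acc * w + chunk) % modulus  # constant multiply + add
    return acc

# Illustrative 8n-bit dynamic range with n = 8, k = 3 -> modulus 2^8 + 3 = 259.
x = 0xDEADBEEFCAFEBABE
assert binary_to_rns(x, 8, 3, +1) == x % 259
print(binary_to_rns(x, 8, 3, +1))
```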
Abstract:
The interest in the environmental fate assessment of chiral pharmaceuticals is increasing, and enantioselective analytical methods are mandatory. This study presents an enantioselective analytical method for the quantification of seven pairs of enantiomers of pharmaceuticals and one pair of enantiomers of a metabolite. The selected chiral pharmaceuticals belong to three different therapeutic classes, namely selective serotonin reuptake inhibitors (venlafaxine, fluoxetine and its metabolite norfluoxetine), beta-blockers (alprenolol, bisoprolol, metoprolol, propranolol) and a beta2-adrenergic agonist (salbutamol). The analytical method was based on solid-phase extraction followed by liquid chromatography tandem mass spectrometry with a triple quadrupole analyser. Briefly, Oasis® MCX cartridges were used to preconcentrate 250 mL of water samples, and the reconstituted extracts were analysed with a Chirobiotic™ V column under reversed mode. The effluent of a laboratory-scale aerobic granular sludge sequencing batch reactor (AGS-SBR) was used to validate the method. Linearity (r² > 0.99), selectivity and sensitivity were achieved in the range of 20–400 ng L⁻¹ for all enantiomers, except for the norfluoxetine enantiomers, for which the range covered 30–400 ng L⁻¹. The method detection limits were between 0.65 and 11.5 ng L⁻¹, and the method quantification limits were between 1.98 and 19.7 ng L⁻¹. The identity of all enantiomers was confirmed using two MS/MS transitions and their ion ratios, according to European Commission Decision 2002/657/EC. The method was successfully applied to evaluate effluents of wastewater treatment plants (WWTP) in Portugal. Venlafaxine and fluoxetine were quantified as non-racemic mixtures (enantiomeric fraction ≠ 0.5). The validated enantioselective method was able to monitor chiral pharmaceuticals in WWTP effluents and has potential to assess enantioselective biodegradation in bioreactors. Further application to environmental matrices such as surface and estuarine waters can be explored.
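The enantiomeric fraction used above to flag non-racemic mixtures is conventionally EF = E1/(E1 + E2), where E1 and E2 are the measured enantiomer concentrations and EF = 0.5 indicates a racemate. The sketch below computes it with hypothetical concentrations, not the study's measurements.

```python
# Hedged sketch of the enantiomeric fraction (EF) calculation.
# The concentrations are illustrative placeholders.

def enantiomeric_fraction(e1_ng_l, e2_ng_l):
    return e1_ng_l / (e1_ng_l + e2_ng_l)

ef = enantiomeric_fraction(120.0, 80.0)   # hypothetical E1/E2 pair, ng/L
print(ef, "non-racemic" if abs(ef - 0.5) > 0.05 else "racemic-like")
```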