986 results for Non Performing Assets


Relevance:

20.00%

Publisher:

Abstract:

In embedded systems, the timing behaviour of control mechanisms is sometimes of critical importance for operational safety. Such high-criticality systems require strict compliance with the offline-predicted task execution time. The execution time of a task subject to preemption may vary significantly from that of its non-preemptive execution. Hence, when preemptive scheduling is required to handle the workload, preemption delay estimation is of paramount importance. In this paper, a preemption delay estimation method for floating non-preemptive scheduling policies is presented. This work builds on [1], extending the model and optimising it considerably. The preemption delay function is substantially tightened in the context of WCET analysis. Moreover, additional information is provided in the form of an extrinsic cache-miss function, which enables the method to provide a solution in situations where the non-preemptive regions are small. Finally, experimental results from the implementation of the proposed solutions in Heptane are provided for real benchmarks, validating the significance of this work.
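
For context, a minimal sketch of the baseline cache-related preemption delay (CRPD) bound that such methods refine: the count of useful cache blocks times the miss penalty. The names and penalty value below are hypothetical, and the paper's optimised estimation is not reproduced here.

```python
# Minimal sketch of the classic CRPD bound: the delay one preemption can
# add at a program point is bounded by the number of "useful" cache blocks
# (blocks currently cached AND reused later) times the miss penalty.
# This is the baseline idea, not the paper's tightened method.

MISS_PENALTY_CYCLES = 30  # hypothetical memory access penalty

def useful_cache_blocks(reaching: set[int], live: set[int]) -> set[int]:
    """Blocks both cached at this point (reaching) and reused later (live)."""
    return reaching & live

def crpd_bound(reaching: set[int], live: set[int]) -> int:
    """Upper bound (in cycles) on the delay one preemption adds here."""
    return len(useful_cache_blocks(reaching, live)) * MISS_PENALTY_CYCLES

# Example: blocks {1, 2, 5, 7} are cached; {2, 5, 9} will be accessed again.
print(crpd_bound({1, 2, 5, 7}, {2, 5, 9}))  # 2 useful blocks -> 60 cycles
```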

Relevance:

20.00%

Publisher:

Abstract:

There is no single definition of a long memory process. Such a process is generally defined as a series whose correlogram decays slowly or whose spectrum is infinite at frequency zero. It is also said that a series with this property is characterized by long-range dependence and non-periodic long cycles, that this feature describes the correlation structure of a series at long lags, or that it is conventionally expressed in terms of a power-law decay of the autocovariance function. The growing interest of international research in this topic is justified by the search for a better understanding of the dynamic nature of the time series of financial asset prices. First, the lack of consistency among results calls for new studies and the use of several complementary methodologies. Second, the confirmation of long memory processes has relevant implications for (1) theoretical and econometric modelling (i.e., martingale price models and technical trading rules), (2) statistical tests of equilibrium and pricing models, (3) optimal consumption/saving and portfolio decisions, and (4) the measurement of efficiency and rationality. Third, empirical scientific questions remain about the identification of the general theoretical market model best suited to modelling the diffusion of the series. Fourth, regulators and risk managers need to know whether there are persistent, and therefore inefficient, markets that can produce abnormal returns. The aim of this dissertation's research is twofold. On the one hand, it seeks to provide additional knowledge for the long memory debate by examining the behaviour of the daily return series of the main EURONEXT stock indices. On the other hand, it intends to contribute to the improvement of the capital asset pricing model (CAPM) by considering an alternative risk measure capable of overcoming the constraints of the efficient market hypothesis (EMH) in the presence of financial series whose processes lack independent and identically distributed (i.i.d.) increments. The empirical study indicates the possibility of using treasury bonds (OTs) with long maturities as an alternative in the computation of market returns, given that their behaviour in sovereign debt markets reflects investors' confidence in the financial condition of the States and measures how investors assess the respective economies based on the performance of their assets in general. Although the price diffusion model defined by geometric Brownian motion (gBm) is claimed to provide a good fit to financial time series, its assumptions of normality, stationarity and independence of the residual innovations are contradicted by the empirical data analysed. Therefore, in the search for evidence of the long memory property in the markets, rescaled-range analysis (R/S) and detrended fluctuation analysis (DFA) are applied, under the fractional Brownian motion (fBm) framework, to estimate the Hurst exponent H for the complete data series and to compute the "local" Hurst exponent H_t in moving windows. In addition, statistical hypothesis tests are performed using the rescaled-range test (R/S), the modified rescaled-range test (M-R/S) and the fractional differencing test (GPH).
Regarding a single conclusion from all methods about the nature of dependence for the stock market in general, the empirical results are inconclusive. That is, the degree of long memory, and hence any classification, depends on each particular market. Nevertheless, the mostly positive overall results support the presence of long memory, in the form of persistence, in the stock returns of Belgium, the Netherlands and Portugal. This suggests that these markets are subject to greater predictability (the "Joseph effect"), but also to trends that can be unexpectedly interrupted by discontinuities (the "Noah effect"), and therefore tend to be riskier to trade. Although the evidence of fractal dynamics has weak statistical support, in line with most international studies, it refutes the random walk hypothesis with i.i.d. increments, which underlies the weak form of the EMH. Accordingly, contributions to the improvement of the CAPM are proposed, through a new fractal capital market line (FCML) and a new fractal security market line (FSML). The new proposal suggests that the risk element (for the market and for an asset) be given by the Hurst exponent H for long lags of stock returns. The exponent H measures the degree of long memory in stock indices, both when the return series follow an uncorrelated i.i.d. process described by the gBm (where H = 0.5, confirming the EMH and making the CAPM adequate) and when they follow a process with statistical dependence described by the fBm (where H differs from 0.5, rejecting the EMH and making the CAPM inadequate). The advantage of the FCML and the FSML is that the long memory measure, defined by H, is the appropriate reference for translating risk in models applicable to data series that follow i.i.d. processes as well as processes with nonlinear dependence. These formulations thus include the EMH as a possible particular case.
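
A minimal sketch of the rescaled-range estimate of the Hurst exponent H central to the dissertation. This is the textbook R/S procedure, not the author's exact implementation; the window sizes and simulated input are illustrative.

```python
# Minimal sketch of rescaled-range (R/S) estimation of the Hurst exponent H.
# H ~ 0.5: uncorrelated (gBm-like); H > 0.5: persistent; H < 0.5: anti-persistent.
import numpy as np

def rs_hurst(returns: np.ndarray, window_sizes=(8, 16, 32, 64, 128, 256)) -> float:
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(returns) - n + 1, n):
            w = returns[start:start + n]
            dev = np.cumsum(w - w.mean())          # cumulative deviations
            r = dev.max() - dev.min()              # range
            s = w.std(ddof=1)                      # standard deviation
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_values)))
    # E[R/S] ~ c * n**H, so H is the slope of log(R/S) against log(n)
    return float(np.polyfit(log_n, log_rs, 1)[0])

rng = np.random.default_rng(0)
print(rs_hurst(rng.standard_normal(4096)))  # i.i.d. noise: roughly 0.5-0.6
```

(The small-sample bias of classic R/S pushes the white-noise estimate slightly above 0.5, which is one reason the dissertation complements it with DFA and the M-R/S and GPH tests.)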

Relevance:

20.00%

Publisher:

Abstract:

Consider a distributed computer system comprising many computer nodes interconnected by a controller area network (CAN) bus. We prove that if priorities are assigned to message streams using the rate-monotonic (RM) policy and the requested capacity of the CAN bus does not exceed 25%, then all deadlines are met.
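
The 25% bound invites a direct test. The sketch below uses a hypothetical workload and a standard CAN 2.0A frame-time model (47 bits of overhead plus worst-case bit stuffing) to check whether the requested bus capacity stays under the bound.

```python
# Minimal sketch of the paper's schedulability condition: with rate-monotonic
# priority assignment on CAN, all deadlines are met if bus utilisation <= 25%.
# The message streams below are hypothetical.

def frame_time_seconds(payload_bytes: int, bit_rate: float) -> float:
    bits = 47 + 8 * payload_bytes                 # CAN 2.0A frame overhead
    stuff = (34 + 8 * payload_bytes - 1) // 4     # worst-case stuff bits
    return (bits + stuff) / bit_rate

streams = [  # (payload bytes, period in seconds) -- hypothetical workload
    (8, 0.010),
    (4, 0.020),
    (2, 0.050),
]

BIT_RATE = 1_000_000  # 1 Mbit/s
utilisation = sum(frame_time_seconds(b, BIT_RATE) / t for b, t in streams)
print(f"U = {utilisation:.3%} -> {'OK' if utilisation <= 0.25 else 'no guarantee'}")
```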

Relevance:

20.00%

Publisher:

Abstract:

COPD is a major cause of morbidity and mortality worldwide, representing a major public health problem due to its high consumption of health and economic resources. Pulmonary rehabilitation is a standard care recommendation for these patients, in order to control symptoms and optimize functional capacity, reducing the health care costs associated with exacerbations and with limitations in activity and participation. However, in patients with severe COPD, exercise performance can be difficult due to extreme dyspnea, decreased muscle strength and fatigue. In addition, hypoxemia and dyspnea may occur during effort and daily activities, limiting quality of life. Thus, non-invasive ventilation (NIV) has been used as an adjunct to exercise, in order to improve exercise capacity in these patients; however, there is no consensus recommending this technique. Our objective was to verify, through a systematic review and meta-analysis, whether exercise with NIV is more effective than exercise without NIV with respect to dyspnea, walked distance, blood gases and health status in COPD patients.

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE To estimate rates of non-adherence to telemedicine strategies aimed at treating drug addiction. METHODS A systematic review was conducted of randomized controlled trials investigating different telemedicine treatment methods for drug addiction. The following databases were consulted between May 18, 2012 and June 21, 2012: PubMed, PsycINFO, SciELO, Wiley (The Cochrane Library), Embase, Clinical Trials and Google Scholar. The Grading of Recommendations Assessment, Development and Evaluation was used to evaluate the quality of the studies. The criteria evaluated were: appropriate sequence of data generation, allocation concealment, blinding, description of losses and exclusions, and analysis by intention to treat. Of the 274 studies selected, 20 were analyzed. RESULTS Non-adherence rates varied between 15.0% and 70.0%. The interventions evaluated lasted at least three months and, although they all used telemedicine as support, treatment methods differed. The quality of the studies also varied, from very poor to high. High-quality studies showed better adherence rates, as did those using more than one intervention technique and a limited treatment time. Mono-user studies showed better adherence rates than poly-user studies. CONCLUSIONS Rates of non-adherence to treatment involving telemedicine on the part of users of psychoactive substances differed considerably, depending on the country, the intervention method, the follow-up time and the substances used. Using more than one intervention technique, a short duration of treatment and the type of substance used by patients appear to facilitate adherence.

Relevance:

20.00%

Publisher:

Abstract:

This paper addresses the use of multidimensional scaling in the evaluation of controller performance. Several nonlinear systems are analyzed based on the closed-loop time response under the action of a reference step input signal. Three alternative performance indices, based on the time response, Fourier analysis and mutual information, are tested. The numerical experiments demonstrate the feasibility of the proposed methodology and motivate its extension to other performance measures and new classes of nonlinearities.
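
A minimal sketch of the pipeline this abstract describes, under simplifying assumptions: plain L2 distances between simulated second-order step responses stand in for the paper's three indices, and scikit-learn's MDS produces the embedding.

```python
# Minimal sketch: build a distance matrix between closed-loop step responses
# and embed it in 2-D with multidimensional scaling (MDS). The distance here
# is a plain L2 norm; the paper uses richer indices (time response, Fourier
# analysis, mutual information). The systems below are hypothetical.
import numpy as np
from scipy.signal import lti, step
from sklearn.manifold import MDS

zetas = [0.2, 0.4, 0.6, 0.8, 1.0, 1.5]        # family of damping ratios
t = np.linspace(0, 20, 500)
responses = [step(lti([1.0], [1.0, 2 * z, 1.0]), T=t)[1] for z in zetas]

n = len(responses)
d = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        d[i, j] = np.linalg.norm(responses[i] - responses[j])

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(d)
for z, (x, y) in zip(zetas, coords):
    print(f"zeta={z:.1f} -> ({x:+.2f}, {y:+.2f})")
```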

Relevance:

20.00%

Publisher:

Abstract:

The urgent need to mitigate traffic problems such as accidents, road hazards, pollution and traffic jams has strongly driven the development of vehicular communications. DSRC (Dedicated Short Range Communications) is the technology of choice for vehicular communications, enabling real-time information exchange among vehicles (V2V, Vehicle-to-Vehicle) and between vehicles and infrastructure (V2I, Vehicle-to-Infrastructure). This paper presents a receiving antenna for a single-lane DSRC control unit. The antenna is a non-uniform array with five microstrip patches. The obtained beamwidth, bandwidth and circular polarization quality, among other characteristics, are compatible with the DSRC standards, making this antenna suitable for the application. © 2014 IEEE.
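
The radiation behaviour of such a non-uniform array is governed by its array factor. The sketch below, with a hypothetical five-element amplitude taper at half-wavelength spacing, illustrates the beamwidth cost of tapering (which in exchange lowers sidelobes); it is not the paper's measured design.

```python
# Minimal sketch: array factor of a 5-element linear array, comparing
# uniform excitation with a hypothetical non-uniform amplitude taper.
import numpy as np

N, d = 5, 0.5                                  # elements, spacing (wavelengths)
theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)

def af_db(amps):
    n = np.arange(N)
    psi = 2 * np.pi * d * np.sin(theta)        # inter-element phase shift
    af = np.abs(amps @ np.exp(1j * np.outer(n, psi)))
    return 20 * np.log10(af / af.max())        # normalised pattern, dB

def hpbw_deg(pattern_db):
    main = theta[pattern_db >= -3.0]           # contiguous main lobe here
    return np.degrees(main.max() - main.min())

for name, amps in [("uniform", np.ones(N)),
                   ("tapered", np.array([0.5, 0.8, 1.0, 0.8, 0.5]))]:
    print(f"{name}: half-power beamwidth = {hpbw_deg(af_db(amps)):.1f} deg")
```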

Relevance:

20.00%

Publisher:

Abstract:

Wireless communications are widely used for various applications, requiring antennas with different features. Often, to achieve the desired radiation pattern, it is necessary to employ antenna arrays with non-uniform excitation of their elements. Power dividers can be used for this purpose; the best known are the T-junction and the Wilkinson power divider, whose main advantage is the isolation between output ports. In this paper, the impact of this isolation on the overall performance of a circularly polarized planar antenna array with non-uniform excitation is investigated. Results show a large decrease in the array bandwidths, both in terms of return loss and of polarization, when the isolation resistors are removed. © 2014 IEEE.
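
The isolation at stake can be seen directly in the ideal (textbook) S-matrices of the two divider types; the sketch below contrasts them numerically. These are canonical forms, not the measured arrays of the paper.

```python
# Minimal sketch contrasting ideal 3-port S-matrices (textbook forms): the
# resistive Wilkinson divider isolates its outputs (S23 = 0) and matches them
# (S22 = 0), while a lossless T-junction (matched only at its input) leaves
# them strongly coupled (|S23| = 0.5, ~6 dB isolation).
import numpy as np

s_wilkinson = (-1j / np.sqrt(2)) * np.array([[0, 1, 1],
                                             [1, 0, 0],
                                             [1, 0, 0]])
s_tjunction = np.array([[0, 1 / np.sqrt(2), 1 / np.sqrt(2)],
                        [1 / np.sqrt(2), 1 / 2, -1 / 2],
                        [1 / np.sqrt(2), -1 / 2, 1 / 2]])

for name, s in [("Wilkinson", s_wilkinson), ("T-junction", s_tjunction)]:
    iso = np.abs(s[1, 2])
    iso_db = -20 * np.log10(iso) if iso > 0 else np.inf
    print(f"{name}: |S23| = {iso:.2f} (isolation {iso_db:.1f} dB), "
          f"|S22| = {np.abs(s[1, 1]):.2f}")
```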

Relevance:

20.00%

Publisher:

Abstract:

The increasing integration of larger amounts of wind energy into power systems raises important operational issues, such as the balance between power generation and demand. Pumped storage hydro (PSH) units are one possible solution to mitigate this problem, since they can store the excess energy in periods of higher generation and lower demand. However, the behaviour of a PSH unit may differ considerably from what is expected in terms of wind power integration when it operates in a liberalized electricity market under a price-maker context. In this regard, this paper models and computes the optimal weekly PSH scheduling in price-taker and price-maker scenarios, both when the PSH unit operates standalone and when it is integrated in a portfolio of other generation assets. Results show that the price-maker standalone PSH will integrate less wind power than in the price-taker situation. Moreover, when the PSH unit is integrated in a portfolio with a base-load power plant, the price elasticity of demand may completely change the operational profile of the PSH unit. (C) 2014 Elsevier Ltd. All rights reserved.
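
A minimal price-taker sketch of the underlying scheduling problem, with hypothetical unit data, formulated as a linear program in SciPy. The paper's price-maker case additionally makes the price depend on the unit's own schedule.

```python
# Minimal price-taker sketch: schedule pumping/generation of a PSH unit
# against a fixed hourly price with scipy's LP solver. All figures are
# hypothetical illustrations.
import numpy as np
from scipy.optimize import linprog

price = np.array([20, 15, 12, 30, 45, 50, 40, 25], dtype=float)  # EUR/MWh
T = len(price)
G_MAX, P_MAX, S_MAX, S0, ETA = 100.0, 100.0, 400.0, 200.0, 0.75

# Decision vector x = [g_0..g_{T-1}, p_0..p_{T-1}, s_0..s_{T-1}]
c = np.concatenate([-price, price, np.zeros(T)])   # minimise -profit

# Storage balance: s_t - s_{t-1} - ETA*p_t + g_t = 0   (with s_{-1} = S0)
A_eq = np.zeros((T, 3 * T))
b_eq = np.zeros(T)
for t in range(T):
    A_eq[t, t] = 1.0                   # + g_t
    A_eq[t, T + t] = -ETA              # - ETA * p_t
    A_eq[t, 2 * T + t] = 1.0           # + s_t
    if t == 0:
        b_eq[t] = S0
    else:
        A_eq[t, 2 * T + t - 1] = -1.0  # - s_{t-1}

bounds = [(0, G_MAX)] * T + [(0, P_MAX)] * T + [(0, S_MAX)] * T
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
g, p = res.x[:T], res.x[T:2 * T]
print("profit:", -res.fun, "\ngen:", g.round(1), "\npump:", p.round(1))
```

The solver pumps in the cheap hours and generates in the expensive ones, but only where the price spread exceeds the round-trip efficiency loss (1/ETA).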

Relevance:

20.00%

Publisher:

Abstract:

A detailed analysis of the fabrics of the chilled margin of a thick dolerite dyke (Foum Zguid dyke, Southern Morocco) was performed in order to better understand the development of sub-fabrics during dyke emplacement and cooling. AMS (anisotropy of magnetic susceptibility) data were complemented with measurements of paramagnetic and ferrimagnetic fabrics (measured with a high-field torque magnetometer), neutron texture analysis and microstructural analyses. The ferrimagnetic and AMS fabrics are similar, indicating that ferrimagnetic minerals dominate the AMS signal, whereas the paramagnetic fabric differs from both. Based on the crystallization timing of the different mineralogical phases, the paramagnetic fabric appears related to the upward flow, while the ferrimagnetic fabric rather reflects the late stage of dyke emplacement and cooling stresses. (C) 2014 Elsevier B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

We analyse the possibility that, in two Higgs doublet models, one or more of the Higgs couplings to fermions or to gauge bosons change sign relative to the respective couplings of the Standard Model Higgs. Possible sign changes in the coupling of a neutral scalar to charged ones are also discussed. These wrong signs can have important physical consequences, manifesting themselves in Higgs production via gluon fusion or in Higgs decay into two gluons or two photons. We consider all possible wrong-sign scenarios, as well as the symmetric limit, in all possible Yukawa implementations of the two Higgs doublet model, for two possibilities: the observed Higgs boson is either the lightest or the heaviest CP-even scalar. We also analyse thoroughly the impact of the currently available LHC data on such scenarios. With all 8 TeV data analysed, all wrong-sign scenarios are allowed in all Yukawa types, even at the 1 sigma level. However, we show that B-physics constraints are crucial in excluding the possibility of wrong-sign scenarios when tan beta is below 1. We also discuss the prospects for probing the wrong-sign scenarios at the next LHC run. Finally, we present a scenario in which the alignment limit could be excluded due to non-decoupling, in the case where the heavy CP-even Higgs is the one discovered at the LHC.
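
For reference, the standard tree-level coupling modifiers in the Type II model make the wrong-sign regime concrete. The expressions below are textbook results; the paper's conventions may differ.

```latex
% Tree-level coupling modifiers of the lighter CP-even state h in a
% Type II 2HDM:
\begin{align*}
  \kappa_V &= \sin(\beta-\alpha),\\
  \kappa_u &= \frac{\cos\alpha}{\sin\beta}
            = \sin(\beta-\alpha) + \cot\beta\,\cos(\beta-\alpha),\\
  \kappa_d &= -\frac{\sin\alpha}{\cos\beta}
            = \sin(\beta-\alpha) - \tan\beta\,\cos(\beta-\alpha).
\end{align*}
% The "wrong sign" regime is \kappa_d \approx -1 with \kappa_V \approx +1,
% which requires \tan\beta\,\cos(\beta-\alpha) \approx 2 and hence sizeable
% \tan\beta; the SM-like (alignment) limit is \cos(\beta-\alpha) \to 0.
```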

Relevance:

20.00%

Publisher:

Abstract:

We introduce the notions of equilibrium distribution and time of convergence in discrete non-autonomous graphs. Under some conditions, we give an estimate of the convergence time to the equilibrium distribution using the second largest eigenvalue of some matrices associated with the system.
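
For the time-homogeneous special case, the role of the second largest eigenvalue is easy to illustrate. The sketch below uses a hypothetical transition matrix; the paper's contribution is the generalisation to non-autonomous sequences of graphs.

```python
# Minimal sketch for the autonomous (time-homogeneous) special case: the
# distance to the equilibrium distribution of a random walk decays like
# |lambda_2|**t, so about log(1/eps) / log(1/|lambda_2|) steps suffice.
import numpy as np

# Hypothetical row-stochastic transition matrix of a small graph.
P = np.array([[0.0, 0.5, 0.5],
              [0.3, 0.4, 0.3],
              [0.5, 0.5, 0.0]])

moduli = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
lam2 = moduli[1]                        # second largest eigenvalue modulus
eps = 1e-6
t_conv = np.log(1 / eps) / np.log(1 / lam2)
print(f"|lambda_2| = {lam2:.3f} -> ~{t_conv:.0f} steps to reach eps = {eps}")
```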

Relevance:

20.00%

Publisher:

Abstract:

Applied Mathematical Modelling, Vol.33

Relevance:

20.00%

Publisher:

Abstract:

In the field of appearance-based robot localization, the mainstream approach uses a quantized representation of local image features. An alternative strategy is the exploitation of raw feature descriptors, thus avoiding approximations due to quantization. In this work, the quantized and non-quantized representations are compared with respect to their discriminativity, in the context of the robot global localization problem. Having demonstrated the advantages of the non-quantized representation, the paper proposes mechanisms to reduce the computational burden this approach would carry when applied in its simplest form. This reduction is achieved through a hierarchical strategy that gradually discards candidate locations, and by exploiting two simplifying assumptions about the training data. The potential of the non-quantized representation is further exploited through the relation between entropy and discriminativity: the non-quantized representation facilitates the assessment of the distinctiveness of features through the entropy measure. Building on this finding, the robustness of the localization system is enhanced by modulating the importance of features according to their entropy. Experimental results support the effectiveness of this approach, as well as the validity of the proposed methods for reducing computation.
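
A minimal sketch of the entropy-based weighting idea, with hypothetical per-feature likelihoods; the paper derives such likelihoods from raw descriptors and combines them within a hierarchical strategy.

```python
# Minimal sketch: a feature whose match likelihood is spread over many
# locations has high entropy and low discriminative value, so its vote is
# down-weighted. The likelihood table below is hypothetical.
import numpy as np

def entropy(p: np.ndarray) -> float:
    p = p / p.sum()
    return float(-(p * np.log2(p + 1e-12)).sum())

n_locations = 4
# Per-feature likelihoods over candidate locations (rows: features).
likelihoods = np.array([
    [0.85, 0.05, 0.05, 0.05],   # distinctive feature -> low entropy
    [0.25, 0.25, 0.25, 0.25],   # ambiguous feature  -> maximum entropy
    [0.60, 0.20, 0.10, 0.10],
])

h_max = np.log2(n_locations)
weights = np.array([1.0 - entropy(p) / h_max for p in likelihoods])
score = weights @ likelihoods          # entropy-weighted location votes
print("weights:", weights.round(2), "-> location:", int(score.argmax()))
```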

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we consider Cournot competition between a nonprofit firm and a for-profit firm in a homogeneous-goods market with uncertain demand. Given an asymmetric tax schedule, we explicitly compute the Bayesian-Nash equilibrium. Furthermore, we analyze the effects of the tax rate and of the degree of altruistic preference on the market equilibrium outcomes.
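
A minimal numerical sketch of such an equilibrium under stated assumptions: linear expected demand, the nonprofit's altruism encoded as a weight on consumer surplus (one common modelling choice, not necessarily the paper's), asymmetric profit taxes, and best-response iteration in place of the paper's closed-form Bayesian analysis. All parameter values are hypothetical.

```python
# Minimal sketch: Cournot duopoly with linear expected demand
# P = a - b*(q1 + q2), a for-profit firm taxed at rate t1, and a nonprofit
# firm whose objective adds a share theta of consumer surplus to its
# after-tax profit. Equilibrium via numerical best-response iteration.
import numpy as np
from scipy.optimize import minimize_scalar

a, b, c = 100.0, 1.0, 20.0          # demand intercept/slope, marginal cost
t1, t2, theta = 0.30, 0.10, 0.5     # asymmetric taxes, altruism weight

def for_profit_objective(q1, q2):
    p = a - b * (q1 + q2)
    return (1 - t1) * (p - c) * q1

def nonprofit_objective(q2, q1):
    p = a - b * (q1 + q2)
    consumer_surplus = 0.5 * b * (q1 + q2) ** 2
    return (1 - t2) * (p - c) * q2 + theta * consumer_surplus

def best_response(objective, rival_q):
    res = minimize_scalar(lambda q: -objective(q, rival_q),
                          bounds=(0, a / b), method="bounded")
    return res.x

q1, q2 = 10.0, 10.0
for _ in range(200):                 # iterate best responses to equilibrium
    q1 = best_response(for_profit_objective, q2)
    q2 = best_response(nonprofit_objective, q1)
print(f"q1 = {q1:.2f}, q2 = {q2:.2f}, price = {a - b * (q1 + q2):.2f}")
```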