932 results for combinatorial protocol in multiple linear regressions


Relevance:

100.00%

Publisher:

Abstract:

A recent trend in distributed computer-controlled systems (DCCS) is to interconnect the distributed computing elements by means of multi-point broadcast networks. Since the network medium is shared among a number of network nodes, access contention exists and must be resolved by a medium access control (MAC) protocol. DCCS usually impose real-time constraints: traffic must be sent and received within a bounded interval, otherwise a timing fault is said to occur. This motivates the use of communication networks whose MAC protocol guarantees bounded access and response times to message requests. PROFIBUS is a communication network in which the MAC protocol is based on a simplified version of the timed-token protocol. In this paper we address the cycle time properties of the PROFIBUS MAC protocol, since knowledge of these properties is of paramount importance for guaranteeing the real-time behaviour of a distributed computer-controlled system supported by this type of network.
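As a hedged illustration of the timed-token idea underlying the PROFIBUS MAC (a minimal sketch under simplified assumptions, not the paper's analysis; the parameter T_TR and the rotation values are hypothetical), a station's budget for non-urgent traffic is the slack between the target rotation time and the measured token rotation time:

```python
# Minimal sketch of the timed-token rule (simplified; illustrative
# values only). A station may spend on low-priority traffic only the
# time by which the token returned early relative to the target
# rotation time T_TR.

T_TR = 100.0  # target token rotation time in ms (hypothetical)

def low_priority_budget(measured_rotation_ms: float) -> float:
    """Budget left for low-priority traffic when the token arrives."""
    return max(0.0, T_TR - measured_rotation_ms)

for rotation in (60.0, 95.0, 120.0):
    print(f"token rotation {rotation:6.1f} ms -> "
          f"budget {low_priority_budget(rotation):5.1f} ms")
```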

Relevance:

100.00%

Publisher:

Abstract:

In Distributed Computer-Controlled Systems (DCCS), special emphasis must be given to the communication infrastructure, which must provide timely and reliable communication services. CAN networks are usually suitable for supporting small-scale DCCS. However, they are known to present some reliability problems, which can lead to unreliable behaviour of the supported applications. In this paper, an atomic multicast protocol for CAN networks is proposed. This protocol exploits CAN's synchronous properties, providing a timely and reliable service to the supported applications. An implementation of this protocol in Ada, on top of the Ada version of Real-Time Linux, is presented and used to demonstrate the advantages and disadvantages of the platform for supporting reliable communications in DCCS.
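The paper specifies the actual protocol; the sketch below is only a loose illustration of the atomic-multicast idea of deferred, order-preserving delivery. The class name, the stabilisation delay, and the CAN identifiers are all hypothetical, not taken from the paper:

```python
import heapq

class AtomicMulticastSketch:
    """Illustrative only: buffer frames and deliver them in CAN
    identifier order once a (hypothetical) stabilisation delay has
    elapsed, so every correct node delivers the same sequence."""
    def __init__(self, delay: float):
        self.delay = delay    # assumed worst-case error-signalling time
        self.pending = []     # heap of (can_id, arrival_time, payload)

    def receive(self, can_id: int, now: float, payload: str) -> None:
        heapq.heappush(self.pending, (can_id, now, payload))

    def deliver_ready(self, now: float):
        out = []
        while self.pending and now - self.pending[0][1] >= self.delay:
            can_id, _, payload = heapq.heappop(self.pending)
            out.append((can_id, payload))
        return out

node = AtomicMulticastSketch(delay=2.0)
node.receive(0x120, now=0.0, payload="setpoint")
node.receive(0x080, now=0.5, payload="alarm")
print(node.deliver_ready(now=3.0))  # lower CAN id (0x080) delivered first
```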

Relevance:

100.00%

Publisher:

Abstract:

The paper provides a comprehensive study on how to use Profibus networks to support real-time communications, that is, ensuring the transmission of real-time messages before their deadlines. Profibus is based on a simplified Timed Token (TT) protocol, which is a well-proven solution for real-time communication systems. However, the differences between Profibus and the TT protocol prevent the application of the usual TT analysis. The main reason is that, contrary to the TT protocol, in the worst case only one high-priority message is processed per token visit. The major contribution of the paper is to prove that, despite this shortcoming, it is possible to guarantee real-time communication behaviour with the Profibus protocol.
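The practical consequence of the "one high-priority message per token visit" behaviour can be made concrete with a back-of-the-envelope bound (a sketch under this simplification only; the cycle-time value is hypothetical, not the paper's result): m queued messages require m token visits.

```python
# Simplified consequence of "one high-priority message per token visit":
# m queued messages need m token visits, so a naive response-time bound
# is m times the worst-case token cycle time. T_CYCLE_MAX is made up.

T_CYCLE_MAX = 25.0  # hypothetical worst-case token cycle time (ms)

def response_time_bound(m_queued: int) -> float:
    return m_queued * T_CYCLE_MAX

for m in (1, 3, 5):
    print(f"{m} queued message(s) -> bound {response_time_bound(m):.0f} ms")
```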

Relevance:

100.00%

Publisher:

Abstract:

WiDom is a previously proposed prioritized medium access control protocol for wireless channels. We present a modification to this protocol that improves its reliability. The modification has similarities with cooperative relaying schemes, but in our protocol all nodes can relay a carrier wave. A preliminary evaluation shows that, under transmission errors, a significant reduction in the number of failed tournaments can be achieved.
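For context, dominance protocols such as WiDom resolve contention through a bitwise priority tournament. The sketch below assumes idealized carrier sensing and no transmission errors (precisely the failures the proposed relaying modification targets); the bit width and priority values are made up:

```python
# Illustrative bitwise priority tournament in the style of dominance
# protocols (simplified: perfect carrier sensing, no transmission
# errors). Nodes with a dominant (0) bit transmit a carrier; nodes
# with a recessive (1) bit listen and withdraw if they hear one.

def tournament(priorities, n_bits=4):
    contenders = set(priorities)
    for bit in reversed(range(n_bits)):            # MSB first
        dominant = {p for p in contenders if not (p >> bit) & 1}
        if dominant:                               # carrier heard:
            contenders = dominant                  # recessive nodes quit
    return min(contenders)

print(tournament([5, 3, 12]))  # lowest number = highest priority -> 3
```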

Relevance:

100.00%

Publisher:

Abstract:

Final Master's project for obtaining the degree of Master in Mechanical Engineering

Relevance:

100.00%

Publisher:

Abstract:

Master's dissertation presented to the Instituto de Contabilidade e Administração do Porto for obtaining the degree of Master in Accounting and Finance, under the supervision of Adalmiro Álvaro Malheiro de Castro Andrade Pereira

Relevance:

100.00%

Publisher:

Abstract:

Master's dissertation presented to the Instituto Superior de Contabilidade e Administração do Porto for obtaining the degree of Master in Auditing, under the supervision of Mestre Adalmiro Álvaro Malheiro de Castro Andrade Pereira

Relevance:

100.00%

Publisher:

Abstract:

Master's dissertation presented to the Instituto de Contabilidade e Administração do Porto for obtaining the degree of Master in Accounting and Finance, under the supervision of Dr. Luís Pereira Gomes

Relevance:

100.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is therefore decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It assumes that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, however, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the abundance fractions sum to a constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm it uses must follow a log(·) law [39] to ensure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored, and a cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory, consisting of several algorithms organized in modules: exemplar selector, adaptive learner, demixer, knowledge base (spectral library), and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smaller convex cone containing the data; the other pixels are rejected when their spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis of a lower-dimensional subspace using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works both with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined; the new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are found. VCA performs much better than PPI and better than or comparably to N-FINDR, yet its computational complexity is between one and two orders of magnitude lower than that of N-FINDR.

The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
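As a reading aid, the sketch below illustrates the projection idea just described: iteratively project the data onto a direction orthogonal to the span of the endmembers found so far and take the pixel at the extreme of the projection. It is a minimal sketch under the pure-pixel assumption, not the published VCA implementation; the function name and the toy data are hypothetical.

```python
import numpy as np

def extract_endmembers(R: np.ndarray, p: int) -> np.ndarray:
    """Sketch of VCA-style extraction: R is bands x pixels; at each
    step, project the data onto a random direction orthogonal to the
    span of the endmembers found so far and keep the extreme pixel."""
    bands, _ = R.shape
    rng = np.random.default_rng(0)
    E = np.zeros((bands, p))
    for k in range(p):
        if k == 0:
            P = np.eye(bands)                     # full space at start
        else:
            A = E[:, :k]                          # found endmembers
            P = np.eye(bands) - A @ np.linalg.pinv(A)  # orth. complement
        d = P @ rng.standard_normal(bands)        # direction in complement
        proj = d @ R                              # 1-D projection of pixels
        E[:, k] = R[:, int(np.argmax(np.abs(proj)))]
    return E

# Toy linear mixture: 3 endmembers, 500 pixels with simplex abundances
rng = np.random.default_rng(1)
M = rng.random((50, 3))                           # endmember signatures
a = rng.dirichlet(np.ones(3), 500).T              # abundance fractions
print(extract_endmembers(M @ a, p=3).shape)       # -> (50, 3)
```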

Relevance:

100.00%

Publisher:

Abstract:

This work surveys and analyses the factors underlying the volatility of electricity prices in the Iberian electricity market. After defining the candidate methods for electricity price forecasting, a model is developed that is capable of predicting energy market prices over several time horizons (quarterly, monthly, weekly and daily). Finally, the results of the applied models are compared, based on a qualitative and quantitative analysis of the evolution of the respective forecasts, as well as on the statistical analysis obtained for each of them.
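The quantitative comparison of forecasts across horizons described above typically rests on error metrics such as MAE and RMSE; below is a minimal sketch with made-up prices, not the dissertation's data or models:

```python
import numpy as np

def mae(y, yhat):  return float(np.mean(np.abs(y - yhat)))
def rmse(y, yhat): return float(np.sqrt(np.mean((y - yhat) ** 2)))

actual   = np.array([52.1, 48.7, 50.3, 55.9])   # EUR/MWh, made-up values
forecast = np.array([50.0, 49.5, 53.0, 54.0])
print(f"MAE={mae(actual, forecast):.2f}  RMSE={rmse(actual, forecast):.2f}")
```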

Relevance:

100.00%

Publisher:

Abstract:

Over the last few decades, the importance of the assessments issued by rating agencies has grown, and they have become a decisive factor in investors' decision-making. Debt issuers are also largely affected by changes in the ratings assigned by these agencies. This research aims, on the one hand, to understand whether these agencies have the power to influence the evolution of public debt and what their role is in the financial market. On the other hand, it aims to identify the determinants of Portuguese public debt, as well as to carry out a percentile analysis with the goal of assigning it a rating. To analyse the factors that may influence public debt, the methodology used is a multiple linear regression estimated by Ordinary Least Squares (OLS), initially comprising eleven independent variables, with public debt as the dependent variable, for the period between 1996 and 2013. Several tests were performed on the initial model in order to find the most explanatory model possible. We were also able to identify an inverse relationship between the rating assigned by these agencies and the evolution of public debt, in the sense that in periods when the rating falls, debt growth is steeper. It was not, however, possible to assign a rating to public debt through a percentile analysis.
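A minimal sketch of the OLS estimation described above, with simulated data; the three placeholder regressors stand in for the dissertation's eleven independent variables, which are not listed in the abstract:

```python
import numpy as np

# Hedged sketch of OLS estimation of a public-debt equation. All data
# are simulated; the regressor names are placeholders, not the
# dissertation's variable set.
rng = np.random.default_rng(0)
n = 18                                        # 1996-2013, annual data
X = np.column_stack([np.ones(n),              # intercept
                     rng.normal(2, 1, n),     # GDP growth (%), placeholder
                     rng.normal(-4, 2, n),    # budget balance (% GDP)
                     rng.normal(4, 1, n)])    # interest rate (%)
beta_true = np.array([60.0, -3.0, -2.0, 1.5])
y = X @ beta_true + rng.normal(0, 2, n)       # public debt (% GDP)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta_hat, 2))                  # recovered coefficients
```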

Relevance:

100.00%

Publisher:

Abstract:

INTRODUCTION: A growing body of evidence shows the prognostic value of the oxygen uptake efficiency slope (OUES), a cardiopulmonary exercise test (CPET) parameter derived from the logarithmic relationship between O2 consumption (VO2) and minute ventilation (VE) in patients with chronic heart failure (CHF). OBJECTIVE: To evaluate the prognostic value of a new CPET parameter - peak oxygen uptake efficiency (POUE) - and to compare it with OUES in patients with CHF. METHODS: We prospectively studied 206 consecutive patients with stable CHF due to dilated cardiomyopathy - 153 male, aged 53.3±13.0 years, 35.4% of ischemic etiology, left ventricular ejection fraction 27.7±8.0%, 81.1% in sinus rhythm, 97.1% receiving ACE-Is or ARBs, 78.2% beta-blockers and 60.2% spironolactone - who performed a first maximal symptom-limited treadmill CPET using the modified Bruce protocol. In 33% of patients an implantable cardioverter-defibrillator (ICD) or cardiac resynchronization therapy device (CRT-D) was implanted during follow-up. Peak VO2, percentage of predicted peak VO2, VE/VCO2 slope, OUES and POUE were analyzed. OUES was calculated using the formula VO2 (l/min) = OUES × log10(VE) + b. POUE was calculated as peak VO2 (l/min) / log10(peak VE (l/min)). Correlation coefficients between the studied parameters were obtained. The prognostic value of each variable adjusted for age was evaluated through Cox proportional hazard models, and R2 percent (R2%) and the V index (V6) were used as measures of the predictive accuracy of events for each of these variables. Receiver operating characteristic (ROC) curves from logistic regression models were used to determine the cut-offs for OUES and POUE. RESULTS: peak VO2: 20.5±5.9; percentage of predicted peak VO2: 68.6±18.2; VE/VCO2 slope: 30.6±8.3; OUES: 1.85±0.61; POUE: 0.88±0.27. During a mean follow-up of 33.1±14.8 months, 45 (21.8%) patients died, 10 (4.9%) underwent urgent heart transplantation and in three patients (1.5%) a left ventricular assist device was implanted. All variables proved to be independent predictors of this combined event; however, the VE/VCO2 slope was most strongly associated with events (HR 11.14). In this population, POUE was associated with a higher risk of events than OUES (HR 9.61 vs. 7.01) and was also a better predictor of events (R2: 28.91 vs. 22.37). CONCLUSION: POUE was more strongly associated with death, urgent heart transplantation and implantation of a left ventricular assist device, and proved to be a better predictor of events than OUES. These results suggest that this new parameter can increase the prognostic value of CPET in patients with CHF.
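The two formulas quoted in the abstract translate directly into code. The sketch below fits OUES as the slope of VO2 against log10(VE) and computes POUE from the peak values; the VE/VO2 samples are made-up illustration data, not patient measurements:

```python
import numpy as np

# OUES and POUE as defined in the abstract:
#   VO2 (l/min) = OUES * log10(VE) + b   (slope fitted over the test)
#   POUE        = peak VO2 / log10(peak VE)
# The samples below are fabricated for illustration only.

VE  = np.array([20.0, 35.0, 55.0, 80.0, 110.0])  # minute ventilation, l/min
VO2 = np.array([0.6, 1.0, 1.4, 1.8, 2.1])        # O2 consumption, l/min

oues, b = np.polyfit(np.log10(VE), VO2, 1)       # slope and intercept
poue = VO2[-1] / np.log10(VE[-1])                # peak-value ratio
print(f"OUES = {oues:.2f}, intercept = {b:.2f}, POUE = {poue:.2f}")
```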

Relevance:

100.00%

Publisher:

Abstract:

The Container Loading Problem (CLP) literature has traditionally evaluated the dynamic stability of cargo by applying two metrics to box arrangements: the mean number of boxes supporting the items, excluding those placed directly on the floor (M1), and the percentage of boxes with insufficient lateral support (M2). However, these metrics, which aim to be proxies for cargo stability during transportation, fail to capture real-world conditions of dynamic stability. In this paper two new performance indicators are proposed to evaluate the dynamic stability of cargo arrangements: the number of fallen boxes (NFB) and the number of boxes within the Damage Boundary Curve fragility test (NB_DBC). Using 1500 solutions for well-known problem instances found in the literature, these new performance indicators are evaluated using a physics simulation tool (StableCargo), which replaces real-world truck transportation with a simulation of the dynamic behaviour of container loading arrangements. Two new dynamic stability metrics that can be integrated within any container loading algorithm are also proposed. The metrics are analytical models of the proposed stability performance indicators, computed by multiple linear regression. Pearson's r correlation coefficient was used to evaluate the performance of the models. The extensive computational results show that the proposed metrics are better proxies for dynamic stability in the CLP than the previously widely used metrics.
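A minimal sketch of the modelling step described above: fit an analytical model of a stability indicator by multiple linear regression and assess it with Pearson's r. The features and data are simulated placeholders, not the paper's arrangement features:

```python
import numpy as np
from scipy import stats

# Sketch: regress a stability indicator (here NFB, number of fallen
# boxes) on arrangement features, then judge the fitted model with
# Pearson's r, as the paper does. Everything below is simulated.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n),
                     rng.uniform(0, 1, n),    # M1-style support ratio
                     rng.uniform(0, 1, n)])   # M2-style lateral support
nfb = 10 - 6 * X[:, 1] - 3 * X[:, 2] + rng.normal(0, 1, n)

beta, *_ = np.linalg.lstsq(X, nfb, rcond=None)
r, _ = stats.pearsonr(X @ beta, nfb)
print(f"Pearson r between model and simulated NFB: {r:.3f}")
```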

Relevance:

100.00%

Publisher:

Abstract:

Objective: the aim of this study was to analyse the relationship between functional mobility, falls risk, level of physical activity and health perception in a sample of 34 community-dwelling, ambulatory subjects: 18 who practised physical exercise two or more times a week for at least 45 minutes, and 16 who did not. Design: cross-sectional exploratory-descriptive survey. Methods: the descriptive variables were age, sex, education, socio-economic level, family status, cognitive status (Mini-Mental State Examination) and emotional status/depression (Geriatric Depression Scale). Functional mobility was assessed with the Timed Up and Go Test (TUG), falls risk with the Functional Reach Test (FRT), level of physical activity with the International Physical Activity Questionnaire (IPAQ) and health perception with the SF-6D. Participants were also asked about the type, frequency and duration of any exercise they practised. Data were analysed using descriptive statistics, multiple linear regression and bivariate correlations, using Pearson's linear correlation coefficient (p ≤ 0.05). Results: in the total sample, most individuals had normal functional mobility (TUG < 10 seconds) and a moderate falls risk (FRT between 15.24 and 25.40 cm), with no differences between groups. Physical activity averaged 685.88 ± 540.16 minutes per week, with 18 individuals practising physical exercise for at least 45 minutes two or more times per week. The mean SF-6D score was 0.915 ± 0.067 and the perception of health status was satisfactory. The between-group analysis showed that the exercise group had a greater number of individuals aged 65-74 years, more schooling and better cognitive status. These subjects were more physically active, mostly exercising twice a week, with only one performing a vigorous-intensity activity. The regression and correlation analyses showed that functional mobility and falls risk worsen with increasing age; that cognitive status was associated with greater functional mobility; and that good functional mobility corresponded to a reduced falls risk, more physical activity, a better perception of health status and maintenance of cognitive status. Subjects with a lower falls risk had better cognitive and emotional status, and a better emotional status corresponded to a better health perception and better cognitive status. Conclusion: maintaining functional mobility reduces falls risk, increases physical activity and improves the health perception of community-dwelling individuals aged 65 or over.
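As an illustration of the bivariate analysis described in the methods (Pearson's linear correlation at p ≤ 0.05), here is a minimal sketch with made-up age and TUG values, not the study's data:

```python
from scipy import stats

# Pearson's r with its p-value at the 0.05 threshold, on fabricated
# age / Timed Up and Go (TUG) pairs for illustration only.
age = [66, 70, 74, 68, 81, 77, 85, 72]
tug = [7.9, 8.4, 9.6, 8.1, 11.2, 10.4, 12.5, 9.0]   # seconds

r, p = stats.pearsonr(age, tug)
print(f"r = {r:.2f}, p = {p:.4f} "
      f"({'significant' if p <= 0.05 else 'not significant'})")
```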

Relevance:

100.00%

Publisher:

Abstract:

This dissertation analyses whether there is a relationship between earnings management and audit quality, based on a study of the behaviour of certain accruals in unlisted Portuguese companies. The many existing studies on the relationship between audit quality and earnings management address numerous aspects, namely the motivations, the forms of manipulation and the detection methods used in auditing; this work examines whether or not the audit process is effective in detecting these practices by managers, since this affects the confidence of those who use financial information. The work therefore builds on these approaches and complements their views and conclusions. In this context, perspectives and information emerge that point to risky behaviour and to its origin, that is, the motivations that drive this practice on the part of both managers and directors. It is in this perspective that this work is framed, in a contemporary society that continually provides real, concrete examples of these practices. One point is common: earnings management arises mainly from managers' interests and motivations in obtaining benefits. The dissertation discusses the incentives that lead to earnings management in the Portuguese context, which appear to be related to the economic and tax environment in which economic agents operate. Another important part of the work is the review of the main methodologies for detecting earnings management, namely models based on accruals and on the distribution of earnings. The empirical model of this study consists of a multiple linear regression aimed at explaining the relationship between discretionary accruals and the variables Big4, firm size, leverage, turnover and profitability. The empirical analysis covered 4,723 unlisted Portuguese companies; the sample was drawn from the SABI database for the period 2011 to 2013. The results suggest that there is a relationship between audit quality and earnings management, concluding that companies audited by the Big4 present lower discretionary accruals than other companies.
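A minimal sketch of the study's empirical model: regress discretionary accruals on a Big4 dummy plus the firm-level controls named in the abstract. All data are simulated, and the negative Big4 coefficient is built into the simulation purely to mirror the reported direction of the effect:

```python
import numpy as np

# Simulated version of the dissertation's regression: discretionary
# accruals on Big4 dummy, size, leverage, turnover and profitability.
rng = np.random.default_rng(0)
n = 500
big4     = rng.integers(0, 2, n).astype(float)   # 1 if Big4 auditor
size     = rng.normal(15, 2, n)                  # log assets, placeholder
leverage = rng.uniform(0, 1, n)
turnover = rng.normal(14, 2, n)                  # log sales, placeholder
roa      = rng.normal(0.05, 0.05, n)             # profitability

X = np.column_stack([np.ones(n), big4, size, leverage, turnover, roa])
accruals = -0.02 * big4 + 0.01 * leverage + rng.normal(0, 0.03, n)

beta, *_ = np.linalg.lstsq(X, accruals, rcond=None)
print(f"Big4 coefficient: {beta[1]:.4f} (negative by construction, "
      f"mirroring the reported effect)")
```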