1000 results for Digital dividend


Relevance:

20.00%

Publisher:

Abstract:

In our capacity as creative artists, researchers, and fine arts professors, most of our activity focuses on the relationships between people and creativity, technology, and resources; most frequently, what we aim to offer are new, enjoyable, and subversive ways of interacting with these three fields. As art teachers, we ask why and how to teach, bearing in mind that digital technology will undoubtedly impact contemporary art practice.

Relevance:

20.00%

Publisher:

Abstract:

Summary report: This thesis consists of three essays on optimal dividend strategies; each essay corresponds to a chapter. The first two essays were written in collaboration with Professors Hans Ulrich Gerber and Elias S. W. Shiu and have been published; see Gerber et al. (2006b) and Gerber et al. (2008). The third essay was written in collaboration with Professor Hans Ulrich Gerber. The problem of optimal dividend strategies goes back to de Finetti (1957). It is stated as follows: given the surplus of a company, determine the optimal strategy for distributing dividends. The criterion used is to maximize the sum of the discounted dividends paid to shareholders until the ruin of the company. Since de Finetti (1957), the problem has taken several forms and has been solved for different models. In the classical model of ruin theory, the problem was solved by Gerber (1969) and, more recently, using another approach, by Azcue and Muler (2005) and Schmidli (2008). In the classical model, money flows in continuously at a constant rate, while the outflows are random: they follow a jump process, namely a compound Poisson process. An example that fits such a model well is the surplus of an insurance company, for which the inflows and outflows are the premiums and the claims, respectively. The first graph of Figure 1 illustrates an example. In this thesis, only barrier strategies are considered: whenever the surplus exceeds the barrier level b, the excess is distributed to shareholders as dividends. The second graph of Figure 1 shows the same surplus example when a barrier at level b is introduced, and the third graph shows the cumulative dividends.

Chapter 1: "Maximizing dividends without bankruptcy". In this first essay, optimal barriers are computed for different claim-amount distributions according to two criteria: I) the optimal barrier is computed using the usual criterion, which is to maximize the expectation of the discounted dividends until ruin; II) the optimal barrier is computed using a second criterion, which is to maximize the expectation of the difference between the discounted dividends until ruin and the deficit at ruin. This essay is inspired by Dickson and Waters (2004), whose idea is to have the shareholders bear the deficit at ruin. This is all the more relevant for an insurance company, whose ruin must be avoided. In the example of Figure 1, the deficit at ruin is denoted R. Numerical examples allow us to compare the optimal barrier levels in situations I and II. This idea of adding a penalty at ruin was generalized in Gerber et al. (2006a).

Chapter 2: "Methods for estimating the optimal dividend barrier and the probability of ruin". In this second essay, since in practice one never has full information about the claim-amount distribution, it is assumed that only the first moments of this distribution are known. The essay develops and examines methods for approximating, in this situation, the optimal barrier level under the usual criterion (case I above). The "de Vylder" and "diffusion" approximations are explained and examined; some of these approximations use the first two, three, or four moments. Numerical examples allow us to compare the approximations of the optimal barrier level, not only with the exact values but also with one another.

Chapter 3: "Optimal dividends with incomplete information". In this third and last essay, we again consider approximation methods for the optimal barrier level when only the first moments of the jump-amount distribution are known. This time, the dual model is considered. As in the classical model, there is a continuous flow in one direction and a jump process in the other. In contrast to the classical model, the gains follow a compound Poisson process while the losses are constant and continuous; see Figure 2. Such a model would be suitable for a pension fund or for a company specializing in discoveries or inventions. The "de Vylder" and "diffusion" approximations, as well as the new "gamma" and "gamma process" approximations, are explained and analyzed. The new approximations appear to give better results in some cases.
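The barrier strategy described above can be sketched in a short simulation. The following is a minimal illustration, not code from the thesis; the premium rate `c`, claim rate `lam`, barrier `b`, discount rate `delta`, and the exponential claim distribution are all assumed example values:

```python
import math
import random

def discounted_stream(rate, t0, t1, delta):
    """Present value at time 0 of a continuous payment at `rate` over [t0, t1]."""
    return rate * (math.exp(-delta * t0) - math.exp(-delta * t1)) / delta

def simulate_discounted_dividends(c=1.5, lam=1.0, mean_claim=1.0, b=4.0,
                                  delta=0.03, x0=2.0, horizon=200.0, seed=1):
    """One path of the classical risk model with a dividend barrier at b.

    Premiums flow in continuously at rate c; claims form a compound
    Poisson process (rate lam, exponential amounts). While the surplus
    sits at the barrier, the premium inflow is paid out as dividends.
    Returns the discounted dividends paid until ruin (or `horizon`)."""
    rng = random.Random(seed)
    t, x, dividends = 0.0, x0, 0.0
    while t < horizon:
        wait = rng.expovariate(lam)              # time to the next claim
        if x < b:
            to_barrier = (b - x) / c             # time needed to reach b
            if wait >= to_barrier:
                x = b
                dividends += discounted_stream(c, t + to_barrier, t + wait, delta)
            else:
                x += c * wait
        else:
            dividends += discounted_stream(c, t, t + wait, delta)
        t += wait
        x -= rng.expovariate(1.0 / mean_claim)   # claim amount
        if x < 0:
            break                                # ruin: dividends stop
    return dividends

pv = simulate_discounted_dividends()
```

Raising the barrier delays (and may entirely prevent) dividend payments on a given path, which is the trade-off the optimal barrier level balances.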

Relevance:

20.00%

Publisher:

Abstract:

The objective of this study was to define the most appropriate spatial resolution to represent the variability of elevation, slope, profile curvature, and topographic wetness index of a terrain, through evaluations with the wavelet transform. The data used in the study originate from three 27-km transects located in areas of the Planalto, Rebordo do Planalto, and Depressão Central in the central region of the state of Rio Grande do Sul, Brazil. The variables (elevation, slope, profile curvature, and topographic wetness index) were derived from a Topodata digital elevation model with 30-m resolution. The resolution with maximum variability was identified by applying the Morlet mother wavelet. The results were analyzed from the isogram and the scalogram of the wavelet coefficients, and indicated that remote sensors with spatial resolution close to 32 and 40 m can be used in research involving terrain attributes such as slope, profile curvature, and topographic wetness index, or environmental phenomena correlated with them. However, it was not possible to establish a conclusive value for the most appropriate spatial resolution for the elevation variable.
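The idea of locating the scale of maximum variability with a Morlet wavelet can be sketched in numpy. This is a generic illustration, not the study's processing chain; the synthetic elevation profile, wavelet parameter `w0`, and scale grid are assumptions:

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Real-valued Morlet wavelet, L2-normalized over scale."""
    u = t / scale
    return np.exp(-0.5 * u**2) * np.cos(w0 * u) / np.sqrt(scale)

def cwt_energy(profile, scales):
    """Total energy of the wavelet coefficients at each scale."""
    t = np.arange(-200, 201, dtype=float)   # truncated wavelet support
    energies = []
    for s in scales:
        coef = np.convolve(profile, morlet(t, s), mode='same')
        energies.append(float(np.sum(coef**2)))
    return np.array(energies)

# Synthetic elevation profile: dominant undulation with a 16-sample wavelength.
x = np.arange(512)
profile = np.sin(2 * np.pi * x / 16.0)
scales = np.arange(2, 40)
energy = cwt_energy(profile, scales)
best_scale = int(scales[np.argmax(energy)])   # scale of maximum variability
```

For a Morlet wavelet with `w0 = 6`, the best-matching scale of a sinusoid of wavelength lambda is roughly `w0 * lambda / (2 * pi)`, so the energy peak lands near scale 15 here.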

Relevance:

20.00%

Publisher:

Abstract:

Final-year degree project based on the creation of three prototypes (a mobile application, a website, and desktop software) to manage a hairdressing school. The development follows a user-centered design approach throughout.

Relevance:

20.00%

Publisher:

Abstract:

The objective of this study was to evaluate the accuracy of horizon obstruction calculated from a digital elevation model (DEM) in different topographic situations. The materials used included a DEM available for the Serra Gaúcha region, in Rio Grande do Sul, Brazil, GPS receivers, a digital camera, a wide-angle lens, and the Idrisi, ArcView/ArcGIS, and Solar Analyst programs. Hemispherical photographs were acquired, and the coordinates of 16 sites in the study area were collected. The coordinates and the DEM were used to calculate horizon obstruction with the Solar Analyst algorithm. The calculated open-sky fraction was compared with the one obtained from the hemispherical photographs. The coefficient of determination was 0.8428, with a mean overestimation of 5.53% of the open-sky fraction. The errors are attributed mainly to obstruction by vegetation, which cannot be identified in the DEM. Horizon obstruction caused by the relief of the Serra Gaúcha can be satisfactorily calculated by Solar Analyst from a DEM interpolated from 1:50,000 topographic maps.
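The core of a horizon-obstruction calculation is a ray scan outward from each viewpoint; a minimal sketch (the 5x5 DEM, the 30-m cell size, and the single scan direction are illustrative assumptions; Solar Analyst's actual algorithm is considerably more elaborate):

```python
import math

def horizon_angle(dem, cell, row, col, drow, dcol):
    """Maximum elevation angle (degrees) seen from (row, col) when
    scanning the DEM along grid direction (drow, dcol). Terrain below
    the horizontal does not obstruct, so the angle is at least 0."""
    z0 = dem[row][col]
    best = 0.0
    r, c, steps = row + drow, col + dcol, 1
    while 0 <= r < len(dem) and 0 <= c < len(dem[0]):
        dist = steps * cell * math.hypot(drow, dcol)
        angle = math.degrees(math.atan2(dem[r][c] - z0, dist))
        best = max(best, angle)
        r, c, steps = r + drow, c + dcol, steps + 1
    return best

# Synthetic 5x5 DEM (meters): flat plain with a 30-m ridge along the east edge.
dem = [[0.0] * 4 + [30.0] for _ in range(5)]
east = horizon_angle(dem, cell=30.0, row=2, col=0, drow=0, dcol=1)
west = horizon_angle(dem, cell=30.0, row=2, col=4, drow=0, dcol=-1)
```

Repeating the scan over a set of azimuths and integrating the unobstructed portion of the hemisphere yields the open-sky fraction that the study compares against the hemispherical photographs.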

Relevance:

20.00%

Publisher:

Abstract:

The objective of this study was to evaluate digital elevation models (DEM) obtained from different data sources and to select one of them to derive morphometric variables used in digital soil mapping. The work was carried out in the Guapi-Macacu watershed, in Rio de Janeiro, Brazil. The primary data used in the models generated by interpolation (DEM-carta and DEM-híbrido) were contour lines, drainage, elevation points, and remote sensing data converted into points. Models obtained from remote sensing and from aerial photogrammetric restitution (the SRTM and IBGE DEMs) were used for comparison. All models had a spatial resolution of 30 m. The evaluation of the elevation models was based on the analysis of derived attributes (slope, aspect, and curvature); spurious depressions; comparison between features derived from the models and the original ones from planialtimetric maps; and analysis of the derived contribution basins. The hybrid digital elevation model shows higher quality than the other models.
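Slope and aspect, two of the derived attributes mentioned, can be computed from a DEM with simple finite differences. A minimal numpy sketch (the 30-m cell size matches the text; the planar test surface and the row-orientation convention are assumptions):

```python
import numpy as np

def slope_aspect(dem, cell=30.0):
    """Slope (degrees) and aspect (degrees clockwise from north, i.e. the
    downslope azimuth) from a DEM whose rows increase southward."""
    dz_drow, dz_dx = np.gradient(dem, cell)   # axis 0 = southward, axis 1 = eastward
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_drow)))
    # Downslope vector in (east, north) components is (-dz_dx, dz_drow);
    # azimuth is measured clockwise from north.
    aspect = np.degrees(np.arctan2(-dz_dx, dz_drow)) % 360.0
    return slope, aspect

# Test surface: a plane rising 10 degrees toward the east,
# so every cell is west-facing (aspect 270).
dem = np.tile(np.arange(5) * 30.0 * np.tan(np.radians(10.0)), (5, 1))
slope, aspect = slope_aspect(dem)
```

Central differences are the simplest choice; GIS packages often use the Horn method (a 3x3 weighted stencil), which is less sensitive to noise in real DEMs.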

Relevance:

20.00%

Publisher:

Abstract:

The objective of this study was to evaluate the use of digital image analysis for the nutritional diagnosis of N in common bean. Four treatments were evaluated, combining two rates of N and P applied to the soil. At pod emission, the Falker chlorophyll index was determined, the trifoliate leaves were scanned, and leaf N content was determined. A score based on the area occupied by green patterns was assigned to the images with the AFSoft program. Leaf N content correlated with the Falker chlorophyll index and with the AFSoft score, but the correlation between the chlorophyll index and the AFSoft score was higher.
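A green-area score of this kind boils down to classifying pixels by color and measuring the classified fraction. The following is a stand-in sketch, not AFSoft's green-pattern classes (which are not described here); the dominance margin of 10 levels is an assumption:

```python
import numpy as np

def green_fraction(rgb):
    """Fraction of pixels classified as 'green': the G channel dominates
    both R and B by a margin. rgb is an (H, W, 3) uint8 array."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    green = (g > r + 10) & (g > b + 10)
    return float(green.mean())

# Synthetic 2x2 "leaf" image: two green pixels, one yellowish, one dark.
img = np.array([[[30, 120, 40], [50, 200, 60]],
                [[140, 150, 30], [20, 25, 20]]], dtype=np.uint8)
frac = green_fraction(img)   # 2 of 4 pixels are green
```
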

Relevance:

20.00%

Publisher:

Abstract:

The main goal of this paper is to analyse gender differences in the use of ICT among university students. We present the results of a study on the uses and the perception of ICT in everyday life and in academia. The study is based on a statistical sample of 1,042 students from five different universities. The results show gender differences both in the use and in the perception of technology.

Relevance:

20.00%

Publisher:

Abstract:

This paper is based on the hypothesis that the use of technology to support learning is not related to whether a student belongs to the Net Generation, but is mainly influenced by the teaching model. The study compares behaviour and preferences regarding ICT use in two groups of university students: face-to-face students and online students. A questionnaire was administered to a sample of students from five universities with different characteristics (one offers online education and four offer face-to-face education with LMS teaching support).

Relevance:

20.00%

Publisher:

Abstract:

Peer-reviewed

Relevance:

20.00%

Publisher:

Abstract:

We present a method to automatically segment red blood cells (RBCs) visualized by digital holographic microscopy (DHM), based on the marker-controlled watershed algorithm. Quantitative phase images of RBCs can be obtained with off-axis DHM and provide important information about each RBC, including size, shape, volume, and hemoglobin content. The key step in marker-controlled watershed segmentation is the accurate localization of the internal and external markers. Here, we first obtain a binary image via the Otsu algorithm. We then apply morphological operations to the binary image to obtain the internal markers, and apply the distance transform combined with the watershed algorithm to generate the external markers from the internal ones. Finally, combining the internal and external markers, we modify the original gradient image and apply the watershed algorithm. By appropriately identifying the internal and external markers, the problems of oversegmentation and undersegmentation are avoided. Furthermore, the internal and external parts of the RBC phase image can also be segmented with the same marker-controlled approach. Our experimental results show that the proposed method achieves good performance in segmenting RBCs and could thus be helpful in combination with automated RBC classification.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a probabilistic approach to model the problem of power-supply voltage fluctuations. Error probability calculations are shown for some 90-nm technology digital circuits. The analysis considered here gives the timing-violation error probability as a new design quality factor, in contrast to conventional techniques that assume the full perfection of the circuit. The evaluation of the error bound can be useful for new design paradigms where retry and self-recovering techniques are applied to the design of high-performance processors. The method described here allows the performance of these techniques to be evaluated by calculating the expected error probability in terms of power-supply distribution quality.
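The flavor of such a calculation can be shown with a toy first-order model: if the critical path fails whenever the supply drops below some minimum voltage, and the supply fluctuation is Gaussian, the timing-violation probability is a Gaussian tail. All numbers below are illustrative assumptions, not the paper's 90-nm data:

```python
import math

def timing_error_probability(v_nominal, sigma_v, v_min):
    """P(timing violation) = P(Vdd < v_min) for Gaussian supply noise,
    where v_min is the lowest supply voltage at which the critical
    path still meets the clock period."""
    z = (v_min - v_nominal) / sigma_v
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Nominal 1.0 V supply with 30 mV of fluctuation; the path fails below 0.94 V.
p = timing_error_probability(v_nominal=1.0, sigma_v=0.03, v_min=0.94)
```

A designer using retry or self-recovery can then trade this expected error probability against clock period or supply-network cost instead of designing for the worst case.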

Relevance:

20.00%

Publisher:

Abstract:

This paper deals with the goodness of the Gaussian assumption when designing second-order blind estimation methods in the context of digital communications. The low- and high-signal-to-noise-ratio (SNR) asymptotic performance of the maximum likelihood estimator, derived assuming Gaussian transmitted symbols, is compared with the performance of the optimal second-order estimator, which exploits the actual distribution of the discrete constellation. The asymptotic study concludes that the Gaussian assumption leads to the optimal second-order solution if the SNR is very low or if the symbols belong to a multilevel constellation such as quadrature amplitude modulation (QAM) or amplitude-phase-shift keying (APSK). On the other hand, the Gaussian assumption can yield important losses at high SNR if the transmitted symbols are drawn from a constant-modulus constellation such as phase-shift keying (PSK) or continuous-phase modulations (CPM). These conclusions are illustrated for the problem of direction-of-arrival (DOA) estimation of multiple digitally modulated signals.
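The distinction between constant-modulus and multilevel constellations already shows up in the normalized fourth-order moment E|s|^4 / (E|s|^2)^2: it equals 1 for PSK, lies between 1 and 2 for QAM, and equals 2 for circular Gaussian symbols. A quick illustrative check (the constellations are standard; this computation is not from the paper):

```python
import numpy as np

def kurtosis_ratio(constellation):
    """Normalized fourth-order moment E|s|^4 / (E|s|^2)^2 of a
    uniformly used constellation."""
    p2 = np.mean(np.abs(constellation) ** 2)
    p4 = np.mean(np.abs(constellation) ** 4)
    return p4 / p2**2

# QPSK: constant modulus, all symbols on the unit circle.
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
# 16-QAM: multilevel, amplitudes vary across symbols.
levels = np.array([-3.0, -1.0, 1.0, 3.0])
qam16 = (levels[:, None] + 1j * levels[None, :]).ravel()
```

The farther this ratio sits from the Gaussian value of 2, the more information the true symbol distribution carries beyond second-order statistics, which is why the Gaussian assumption costs most for PSK at high SNR.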

Relevance:

20.00%

Publisher:

Abstract:

A general criterion for the design of adaptive systems in digital communications, called the statistical reference criterion, is proposed. The criterion is based on imposing the probability density function of the signal of interest at the output of the adaptive system, with its application to scenarios with highly powerful interferers being the main focus of this paper. Knowledge of the pdf of the wanted signal is used as a discriminator between signals, so that interferers with differing distributions are rejected by the algorithm. Its performance is studied over a range of scenarios. Equations for gradient-based coefficient updates are derived, and the relationship with other existing algorithms, like the minimum variance and the Wiener criterion, is examined.

Relevance:

20.00%

Publisher:

Abstract:

The aim of this work was to select an appropriate digital filter for a servo application and to filter the noise from the measurement devices. A low-pass filter attenuates high-frequency noise beyond the specified cut-off frequency. Digital low-pass filters with both IIR and FIR responses were designed and experimentally compared to understand their characteristics from the corresponding step responses of the system. The Kaiser window and equiripple methods were selected for the FIR response, whereas the Butterworth, Chebyshev, inverse Chebyshev, and elliptic methods were used for the IIR case. Limitations of digital filter design for a servo system were analysed; in particular, the dynamic influence of each designed filter on the control stability of the electrical servo drive was observed. The criteria for selecting parameters when designing digital filters for servo systems were studied. Control system dynamics was given significant importance, and the use of FIR and IIR responses in different situations was compared to justify the selection of a suitable response in each case. The software used in the filter design was MatLab/Simulink® and dSPACE's DSP application. A speed-controlled permanent magnet linear synchronous motor was used in the experimental work.
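A minimal sketch of the FIR-versus-IIR comparison, using scipy rather than the MatLab/Simulink toolchain of the thesis; the sampling rate, cut-off, filter orders, and test signal are all assumed example values:

```python
import numpy as np
from scipy import signal

fs = 1000.0          # sampling rate, Hz
cutoff = 50.0        # low-pass cut-off, Hz

# IIR: 4th-order Butterworth (maximally flat passband, short delay).
b_iir, a_iir = signal.butter(4, cutoff, fs=fs)
# FIR: 101-tap Kaiser-window design (linear phase, longer delay).
taps = signal.firwin(101, cutoff, fs=fs, window=('kaiser', 8.0))

# Test signal: a 10 Hz servo component plus 200 Hz "measurement noise".
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)

y_iir = signal.lfilter(b_iir, a_iir, x)
y_fir = signal.lfilter(taps, [1.0], x)

def band_amp(y, f):
    """Amplitude of the f-Hz component of a 1-second signal via the FFT."""
    spec = np.abs(np.fft.rfft(y)) / (len(y) / 2)
    return spec[int(f)]
```

Both designs pass the 10 Hz component and suppress the 200 Hz noise; for a servo loop, the deciding factors are the ones the thesis examines, such as the phase lag each filter injects into the control loop.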