951 results for local iterated function systems
Abstract:
When operated with a metallic tip and sample, the scanning tunnelling microscope constitutes a nanoscale, plasmonic light source yielding broadband emission up to a photon energy determined by the applied bias. The emission is due to tunnelling-electron excitation and subsequent radiative decay of localized plasmon modes, which can be on the lateral scale of a single metal grain (~25 nm) or less. For a Au tip and polycrystalline Au sample under ambient conditions, it is found that the intensity and spectral content of the emitted light do not depend on the lateral grain dimension but are predominantly determined by the tip geometry. However, the intensity increases strongly with increasing film thickness (grain depth) up to 20-25 nm, approximately the skin depth of the Au film. Photon maps can show less-emissive grains, and two classes of this occurrence are distinguished. The first is geometrical in origin (a double-tip structure in this case), while the second is due to a contamination-induced lowering of the local work function that causes the tunnel gap to increase. It is suggested that differences in work-function lowering between grains presenting different crystalline facets, combined with an exponential decay of the emitted light intensity with tip-sample distance, lead to grain contrast. These results are relevant to tip-enhanced Raman scattering and to the fabrication of micro/nanoscale planar light-emitting tunnel devices.
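As an illustration of the suggested contrast mechanism (not code from the paper), the following minimal sketch models how a contamination-lowered work function widens the tunnel gap at constant current and exponentially suppresses the emitted intensity; the work-function values, gap, and photon decay length are assumed numbers for the example only.

```python
# Hypothetical numerical sketch: contamination lowers the local work function,
# the constant-current feedback widens the tunnel gap, and the emitted light
# intensity (assumed to decay exponentially with the gap) drops.
import math

HBAR = 1.0545718e-34   # J s
M_E = 9.10938e-31      # electron mass, kg
EV = 1.602176634e-19   # J per eV

def kappa(phi_ev):
    """Inverse decay length (1/m) of the tunnelling wavefunction for barrier phi."""
    return math.sqrt(2.0 * M_E * phi_ev * EV) / HBAR

# Assumed values for illustration only.
s_clean = 0.7e-9       # tunnel gap over a clean grain (m)
phi_clean = 5.0        # work function of clean Au (eV)
phi_contam = 3.5       # contamination-lowered work function (eV)
lambda_ph = 0.3e-9     # assumed decay length of emitted intensity with gap (m)

# Constant-current condition: kappa_clean * s_clean = kappa_contam * s_contam.
s_contam = s_clean * kappa(phi_clean) / kappa(phi_contam)

# Exponential decay of emitted light with tip-sample distance.
contrast = math.exp(-(s_contam - s_clean) / lambda_ph)

print(f"gap widens from {s_clean*1e9:.2f} nm to {s_contam*1e9:.2f} nm")
print(f"relative photon intensity on contaminated grain: {contrast:.2f}")
```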
Abstract:
This thesis presents a localization system based exclusively on ultrasound, with no need to resort to any other technology. The localization system was designed to operate in environments where other technologies cannot be used, or where their use is restricted, such as underwater applications or hospital environments. The proposed localization system uses a network of fixed beacons that allows mobile stations to localize themselves. Because both data transmission and distance measurement are required, an echo-robust ultrasonic pulse was developed that performs both tasks successfully. The localization system allows mobile stations to localize themselves by merely listening to the information in the ultrasonic pulses sent by the beacons, using an algorithm based on time differences of arrival. In this way, user privacy is guaranteed and the system is completely independent of the number of users. To ease the deployment of the beacon network, only the positions of a few beacons, called anchor beacons, need to be determined manually. These allow the remaining, fully autonomous beacons to localize themselves through an iterative localization algorithm based on the minimization of a cost function. For the system to work as intended, the beacons must be able to synchronize their clocks and measure the distances between them. To this end, this thesis proposes a clock-synchronization protocol that also yields inter-beacon distance measurements with an exchange of only three ultrasonic messages. Additionally, the localization system allows damaged beacons to be replaced without compromising the operability of the network, reducing maintenance complexity. Furthermore, an ultrasound simulator for indoor environments was implemented, which proved to be quite accurate and a highly valuable tool for simulating the behaviour of the localization system under controlled conditions.
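As a sketch of the passive localization idea (not the thesis code), the following example solves a time-difference-of-arrival problem by minimizing a least-squares cost over range-difference residuals; the beacon layout, speed of sound, and measured TDoAs are illustrative assumptions.

```python
# Minimal sketch of TDoA-based self-localization: a mobile station hears
# ultrasonic pulses from beacons at known positions and minimizes a
# least-squares cost over the range-difference residuals.
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound in air (m/s), assumed constant

beacons = np.array([[0.0, 0.0], [8.0, 0.0], [8.0, 6.0], [0.0, 6.0]])  # known positions
true_pos = np.array([3.0, 2.0])

# TDoAs relative to beacon 0 (what the mobile station can actually measure).
ranges = np.linalg.norm(beacons - true_pos, axis=1)
tdoa = (ranges[1:] - ranges[0]) / C

def residuals(p):
    """Difference between predicted and measured range differences."""
    d = np.linalg.norm(beacons - p, axis=1)
    return (d[1:] - d[0]) - tdoa * C

sol = least_squares(residuals, x0=np.array([4.0, 3.0]))  # start at room centre
print("estimated position:", sol.x)  # ~ [3. 2.]
```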
Abstract:
In this article we provide brief descriptions of three classes of schedulers: Operating Systems Process Schedulers, Cluster Systems Job Schedulers, and Big Data Schedulers. We describe their evolution from early adoptions to modern implementations, considering both the use and the features of the algorithms. We then discuss the differences between the presented classes of schedulers and their chronological development. In conclusion, we highlight similarities in the design focus of scheduling strategies, applicable to both local and distributed systems.
Abstract:
Dissertation submitted to obtain the degree of Master in Civil Engineering in the area of Building Construction
Abstract:
Since the arrival of several new antivirals and owing to the growing molecular and clinical knowledge of hepatitis B virus (HBV) infection, therapy of hepatitis B has become complex. Clinical guidelines aim at streamlining medical practice: in this respect, the European Association for the Study of the Liver (EASL) recently issued clinical practice guidelines for the management of chronic hepatitis B. Guidelines drawn up by international experts nevertheless need to be adapted to local health care systems. Here, we summarise the EASL guidelines with some minor modifications to make them compatible with the particular Swiss situation, and discuss some aspects in more detail. Chronic hepatitis B is a complex disease with several phases in which host and viral factors interact: the features of this continuous interplay need to be evaluated when choosing the most appropriate treatment. The EASL guidelines recommend, as first-line agents, the most potent antivirals available with the optimal resistance profile, in order to suppress HBV DNA as rapidly and as sustainably as possible. Once therapy has been started, the infection evolves and resistant viral strains may emerge. Rescue therapy needs to be started early, with more potent agents lacking cross-resistance.
Abstract:
This doctoral thesis consists of three chapters dealing with large portfolio choice and risk measurement. The first chapter addresses the estimation-error problem in large portfolios within the mean-variance framework. The second chapter explores the importance of currency risk for portfolios of domestic assets and studies the links between the stability of large-portfolio weights and currency risk. Finally, under the assumption that the decision maker is pessimistic, the third chapter derives the risk premium and a measure of pessimism, and proposes a methodology for estimating the derived measures. The first chapter improves optimal portfolio choice within the mean-variance framework of Markowitz (1952). This is motivated by the very disappointing results obtained when the mean and variance are replaced by their sample estimates. The problem is amplified when the number of assets is large and the sample covariance matrix is singular or nearly singular. In this chapter we examine four regularization techniques for stabilizing the inverse of the covariance matrix: ridge, spectral cut-off, Landweber-Fridman, and LARS Lasso. Each of these methods involves a tuning parameter that must be selected. The main contribution of this part is to derive a purely data-driven method for selecting the regularization parameter optimally, i.e. so as to minimize the expected loss of utility. Specifically, a cross-validation criterion taking the same form for all four regularization methods is derived. The resulting regularized rules are then compared with the rule based directly on the data and with the naive 1/N strategy, in terms of expected utility loss and Sharpe ratio. Performance is measured both in-sample and out-of-sample for various sample sizes and numbers of assets. The simulations and the empirical illustration show that regularizing the covariance matrix significantly improves the data-based Markowitz rule and outperforms the naive portfolio, especially when the estimation-error problem is severe. In the second chapter, we investigate the extent to which optimal and stable portfolios of domestic assets can reduce or eliminate currency risk, using monthly returns on 48 US industries over the period 1976-2008. To address the instability problems inherent in large portfolios, we adopt the spectral cut-off regularization method. This yields a family of optimal and stable portfolios, allowing investors to choose different percentages of principal components (or degrees of stability). Our empirical tests are based on an international asset pricing model (IAPM), in which currency risk is decomposed into two factors representing the currencies of industrialized countries on one hand and those of emerging countries on the other. Our results indicate that currency risk is priced and time-varying for stable minimum-risk portfolios.
Moreover, these strategies lead to a significant reduction in exposure to currency risk, while the contribution of the currency risk premium remains unchanged on average. Optimal portfolio weights are an alternative to market-capitalization weights; this chapter therefore complements the literature showing that the currency risk premium matters at the industry and country level in most countries. In the last chapter, we derive a risk-premium measure for rank-dependent preferences and propose a measure of the degree of pessimism, given a distortion function. The measures introduced generalize the risk-premium measure derived under expected utility theory, which is frequently violated in both experimental and real-world situations. Within the large family of preferences considered, particular attention is given to CVaR (conditional value at risk). This risk measure is increasingly used for portfolio construction and is advocated as a complement to VaR (value at risk), in use since 1996 by the Basel Committee. In addition, we provide the statistical framework needed to carry out inference on the proposed measures. Finally, the properties of the proposed estimators are assessed through a Monte Carlo study and an empirical illustration using daily US stock-market returns over the period 2000-2011.
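As an illustration of one of the four regularization schemes discussed (not the thesis code), the following sketch applies ridge regularization to a nearly singular sample covariance matrix before inverting it for mean-variance weights; the dimensions, penalty grid, and the simple holdout criterion standing in for the thesis's cross-validation rule are illustrative assumptions.

```python
# Minimal sketch: ridge-regularize the sample covariance before inverting it
# in the Markowitz rule, with the penalty chosen by a crude holdout utility.
import numpy as np

rng = np.random.default_rng(0)
T, N = 120, 100                      # few observations relative to assets
R = rng.normal(0.005, 0.04, (T, N))  # simulated monthly returns

def markowitz_weights(Sigma, mu, gamma=5.0):
    """Unconstrained mean-variance rule w = (1/gamma) Sigma^{-1} mu."""
    return np.linalg.solve(Sigma, mu) / gamma

def ridge(Sigma, tau):
    """Ridge-regularized covariance: shrink towards a scaled identity."""
    n = Sigma.shape[0]
    return Sigma + tau * np.trace(Sigma) / n * np.eye(n)

R_in, R_out = R[:80], R[80:]                 # estimation / holdout split
S_in = np.cov(R_in, rowvar=False)            # nearly singular when N ~ T
mu_in = R_in.mean(axis=0)

def holdout_utility(tau, gamma=5.0):
    w = markowitz_weights(ridge(S_in, tau), mu_in, gamma)
    r = R_out @ w
    return r.mean() - 0.5 * gamma * r.var()

taus = [10.0**k for k in range(-4, 2)]
best = max(taus, key=holdout_utility)
r_naive = R_out @ (np.ones(N) / N)           # the 1/N benchmark
print("selected tau:", best)
print("utility at best tau:", holdout_utility(best))
print("utility of 1/N rule:", r_naive.mean() - 2.5 * r_naive.var())
```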
Abstract:
The present study examines the sustainability of medicinal plants in Kerala, with economic considerations in the domestication and conservation of forest resources. There is worldwide consensus that medicinal plants are important not only in local health-support systems but also for rural income and foreign-exchange earnings. The sustainability of medicinal plants is important for the survival of forest dwellers and the forest ecosystem, for conserving a heritage of human knowledge, and for overall development through linkages. More equitable sharing of the benefits from commercial utilization of medicinal plants was found essential for the sustainability of the plants, and cultivation is crucial for the sustainability of the sector. Through a direct tie-up with industry, the societies can earn more income and pass on better collection charges to their members. Cultivation should be carried out in wastelands, tiger reserves, and plantation forests. In short, the various players in the sector could solve their specific problems through co-operation and networking among themselves; they should rely on self-help rather than urging the government to take care of their needs. As far as the government is concerned, the Forest Department, by checking over-exploitation of wild plants, and the Agriculture Department, by encouraging cultivation, could contribute to the sustainable development of the medicinal-plant sector.
Abstract:
A recurrent iterated function system (RIFS) is a generalization of an IFS and provides non-self-affine fractal sets that are closer to natural objects. In general, its attractor is not a continuous surface in R^3. A recurrent fractal interpolation surface (RFIS) is an attractor of an RIFS that is the graph of a continuous bivariate interpolation function. We introduce a general method for generating recurrent interpolation surfaces that are attractors of RIFSs for any data set on a grid.
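As a concrete illustration of the mechanics (not the paper's construction), the following sketch builds the simpler 1-D analogue, an ordinary non-recurrent fractal interpolation function through given data points, and renders its attractor with the chaos game; the surface construction on a grid generalizes this idea. Data points and vertical scaling factors are invented for the example.

```python
# Minimal 1-D analogue: a fractal interpolation function built from affine
# maps w_i(x, y) = (a_i x + e_i, c_i x + d_i y + f_i) that send the whole
# interval's endpoints onto consecutive data points, iterated via the
# chaos game.
import random

data = [(0.0, 0.0), (0.4, 0.5), (0.7, 0.2), (1.0, 0.8)]  # interpolation points
d = [0.3, -0.4, 0.3]   # vertical scaling factors, |d_i| < 1 for contractivity

x0, y0 = data[0]
xN, yN = data[-1]
maps = []
for i in range(len(data) - 1):
    (xa, ya), (xb, yb) = data[i], data[i + 1]
    a = (xb - xa) / (xN - x0)                      # horizontal contraction
    e = (xN * xa - x0 * xb) / (xN - x0)
    c = (yb - ya - d[i] * (yN - y0)) / (xN - x0)   # chosen so w_i maps the
    f = (xN * ya - x0 * yb - d[i] * (xN * y0 - x0 * yN)) / (xN - x0)  # endpoints correctly
    maps.append((a, e, c, d[i], f))

def w(i, x, y):
    a, e, c, di, f = maps[i]
    return a * x + e, c * x + di * y + f

# Chaos game: the orbit fills out the attractor, i.e. the graph of the
# continuous interpolation function through the data points.
random.seed(1)
x, y = 0.5, 0.5
pts = []
for n in range(20000):
    x, y = w(random.randrange(len(maps)), x, y)
    if n > 100:        # discard the transient before the orbit settles
        pts.append((x, y))

print(len(pts), "attractor points; e.g.", pts[:3])
```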
Abstract:
Ethernet is becoming the dominant aggregation technology for carrier transport networks; however, as it is a LAN technology, native bridged Ethernet does not fulfill all carrier requirements. One of the schemes proposed by the research community to make Ethernet fulfill carrier requirements is Ethernet VLAN-label switching (ELS). ELS allows the creation of label-switched data paths using a 12-bit label encoded in the VLAN tag control information field. Previous label-switching technologies such as MPLS use more bits to encode the label and hence do not suffer from the label-sparsity issues that ELS might. This paper studies the sparsity issues resulting from the reduced ELS VLAN-label space and proposes the use of the label-merging technique to improve label-space usage. Experimental results show that label merging considerably improves label-space usage.
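To make the saving concrete, here is a toy sketch (our illustration, not the paper's experiment) of why merging helps in a 12-bit (4096-value) label space: paths towards the same egress that share a link can reuse one label downstream of the merge point, so per-link label consumption scales with the number of egresses rather than the number of paths. The topology and path counts are invented.

```python
# Count distinct labels needed per link, with and without label merging.
from collections import defaultdict

paths = [(f"ingress{i}", "core", "egress") for i in range(100)]  # 100 LSPs, one egress

def labels_per_link(paths, merging):
    used = defaultdict(set)
    for path in paths:
        for a, b in zip(path, path[1:]):
            # Without merging, each LSP needs its own label on the link;
            # with merging, LSPs towards the same destination share one.
            key = path[-1] if merging else path
            used[(a, b)].add(key)
    return {link: len(lbls) for link, lbls in used.items()}

print("no merging:", labels_per_link(paths, merging=False)[("core", "egress")])  # 100
print("merging   :", labels_per_link(paths, merging=True)[("core", "egress")])   # 1
```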
Abstract:
We consider problems of splitting and connectivity augmentation in hypergraphs. In a hypergraph G = (V + s, E), to split two edges su, sv is to replace them with a single edge uv. We are interested in doing this in such a way as to preserve a defined level of connectivity in V. The splitting technique is often used as a way of adding new edges to a graph or hypergraph so as to augment the connectivity to some prescribed level. We begin by providing a short history of work done in this area. Several preliminary results are then given in a general form so that they may be used to tackle several problems. We then analyse the hypergraphs G = (V + s, E) for which there is no split preserving the local edge-connectivity present in V. We provide two structural theorems, one of which implies a slight extension of Mader's classical splitting theorem. We also provide a characterisation of the hypergraphs for which there is no such “good” split, and a splitting result concerned with a specialisation of the local-connectivity function. We then use our splitting results to provide an upper bound on the smallest number of size-two edges that must be added to any given hypergraph to ensure that, in the resulting hypergraph, λ(x, y) ≥ r(x, y) for all x, y in V, where r is an integer-valued, symmetric requirement function on V × V. This is the so-called “local-edge-connectivity augmentation problem” for hypergraphs. We also provide an extension of a theorem of Szigeti about augmenting to satisfy a requirement r, but using hyperedges. Next, in a result born of collaborative work with Zoltán Király from Budapest, we show that the local-connectivity augmentation problem is NP-complete for hypergraphs. Lastly, we concern ourselves with an augmentation problem that includes a locational constraint: we are given a hypergraph H = (V, E) with a bipartition P = {P1, P2} of V and asked to augment it with size-two edges, so that the result is k-edge-connected and has no new edge contained in either P1 or P2. We consider the splitting technique and describe the obstacles that prevent us from forming “good” splits. From this we deduce results about which hypergraphs have a complete Pk-split, leading to a minimax result on the optimal number of edges required and a polynomial algorithm that provides an optimal augmentation.
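As a small executable illustration of the objects involved (not from the thesis), the sketch below performs a split of two edges su, sv into uv and checks the local edge-connectivity λ(x, y) before and after, computing λ as a max-flow in which each hyperedge can carry at most one path; the example hypergraph is invented and networkx supplies the flow computation.

```python
# Splitting off edges at s in a hypergraph, with a max-flow check that
# lambda(x, y), the max number of hyperedge-disjoint x-y paths, is preserved.
import networkx as nx

def local_edge_connectivity(edges, x, y):
    """lambda(x, y): max number of hyperedge-disjoint x-y paths (Menger)."""
    G = nx.DiGraph()
    for i, e in enumerate(edges):
        G.add_edge(("in", i), ("out", i), capacity=1)  # each hyperedge used once
        for v in e:
            G.add_edge(v, ("in", i))                    # uncapacitated
            G.add_edge(("out", i), v)
    return nx.maximum_flow_value(G, x, y)

def split(edges, s, u, v):
    """Replace size-two edges su and sv by a single edge uv."""
    out = list(edges)
    out.remove(frozenset({s, u}))
    out.remove(frozenset({s, v}))
    out.append(frozenset({u, v}))
    return out

edges = [frozenset(e) for e in
         [{"s", "a"}, {"s", "b"}, {"s", "c"}, {"s", "d"},
          {"a", "b", "c"}, {"b", "c", "d"}, {"a", "d"}]]

before = local_edge_connectivity(edges, "a", "d")
after = local_edge_connectivity(split(edges, "s", "a", "d"), "a", "d")
print("lambda(a, d) before split:", before, "after split:", after)
```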
Abstract:
We study inverse problems in neural field theory, i.e., the construction of synaptic weight kernels yielding a prescribed neural field dynamics. We address the issues of existence, uniqueness, and stability of solutions to the inverse problem for the Amari neural field equation as a special case, and prove that these problems are generally ill-posed. In order to construct solutions to the inverse problem, we first recast the Amari equation into a linear perceptron equation in an infinite-dimensional Banach or Hilbert space. In a second step, we construct sets of biorthogonal function systems allowing the approximation of synaptic weight kernels by a generalized Hebbian learning rule. Numerically, this construction is implemented by the Moore–Penrose pseudoinverse method. We demonstrate the instability of these solutions and use the Tikhonov regularization method for stabilization and to prevent numerical overfitting. We illustrate the stable construction of kernels by means of three instructive examples.
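The following minimal sketch (our illustration, not the paper's code) follows the construction described: discretize the field, stack snapshots of a prescribed dynamics, and solve the resulting linear "perceptron" equation W f(U) = τ dU/dt + U for the weight kernel with a Tikhonov-regularized pseudoinverse. The field size, τ, the regularization parameter α, the firing-rate function, and the prescribed dynamics are illustrative assumptions.

```python
# Kernel construction for a discretized Amari-type field via a
# Tikhonov-regularized (ridge) pseudoinverse.
import numpy as np

rng = np.random.default_rng(0)
n, T, tau, alpha = 64, 200, 1.0, 1e-3

def f(u):
    """Logistic firing-rate function."""
    return 1.0 / (1.0 + np.exp(-u))

# Prescribed (here: synthetic) field dynamics, one column per time sample.
U = np.cumsum(rng.normal(0, 0.1, (n, T)), axis=1)
dU = np.gradient(U, axis=1)          # finite-difference time derivative

F = f(U)                             # firing rates
G = tau * dU + U                     # right-hand side of W F = G

# Tikhonov-regularized pseudoinverse: W = G F^T (F F^T + alpha I)^{-1};
# alpha = 0 would recover the (unstable) Moore-Penrose solution.
W = G @ F.T @ np.linalg.inv(F @ F.T + alpha * np.eye(n))

# Residual of the discretized field equation under the constructed kernel.
print("relative residual:", np.linalg.norm(W @ F - G) / np.linalg.norm(G))
```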
Abstract:
Let K⊆R be the unique attractor of an iterated function system. We consider the case where K is an interval and study those elements of K with a unique coding. We prove under mild conditions that the set of points with a unique coding can be identified with a subshift of finite type. As a consequence, we can show that the set of points with a unique coding is a graph-directed self-similar set in the sense of Mauldin and Williams (1988). The theory of Mauldin and Williams then provides a method by which we can explicitly calculate the Hausdorff dimension of this set. Our algorithm can be applied generically, and our result generalises the work of Daróczy, Kátai, Kallós, Komornik and de Vries.
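As a toy computation in the spirit of the result (the matrix and contraction ratio below are invented, not taken from the paper): once the points with a unique coding are identified with a subshift of finite type, the Mauldin-Williams machinery gives their Hausdorff dimension as log of the spectral radius of the transition matrix divided by minus the log of the common contraction ratio.

```python
# Hausdorff dimension of a graph-directed self-similar set with a common
# contraction ratio r: dim_H = log(rho(A)) / -log(r), where A is the
# transition matrix of the subshift of finite type.
import numpy as np

r = 1.0 / 3.0                      # common contraction ratio of the IFS maps
A = np.array([[1, 1, 0],           # transition matrix encoding which
              [0, 1, 1],           # coding symbols may follow which
              [1, 0, 1]])

rho = max(abs(np.linalg.eigvals(A)))   # spectral radius
dim_H = np.log(rho) / -np.log(r)
print(f"spectral radius {rho:.4f}, Hausdorff dimension {dim_H:.4f}")
```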
Abstract:
A lot-sizing and scheduling problem prevalent in small market-driven foundries is studied. There are two related decision levels: (1) the furnace scheduling of metal-alloy production, and (2) moulding-machine planning, which specifies the type and size of production lots. A mixed integer programming (MIP) formulation of the problem is proposed, but it is impractical to solve in reasonable computing time for non-small instances. As a result, a faster relax-and-fix (RF) approach is developed that can also be used on a rolling-horizon basis where only immediate-term schedules are implemented. As well as a MIP method to solve the basic RF approach, three variants of a local search method are developed and tested on instances based on the literature. Finally, foundry-based tests with a real order book resulted in a very substantial reduction of delivery delays and finished inventory, better use of capacity, and much faster schedule definition compared with the foundry's own practice.
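To show the relax-and-fix idea itself (on a toy single-item lot-sizing model, not the paper's foundry formulation), the sketch below walks a window over the planning periods, keeps the setup variables binary only inside the window, relaxes the future ones, and fixes the window's values before moving on; the data are invented and PuLP with its bundled CBC solver is assumed available.

```python
# Relax-and-fix on a toy capacitated lot-sizing model with setup binaries.
import pulp

T = 8
demand = [30, 20, 0, 50, 10, 40, 0, 25]
cap, setup_cost, hold_cost = 60, 100.0, 1.0

x = [pulp.LpVariable(f"x{t}", lowBound=0) for t in range(T)]  # production
y = [pulp.LpVariable(f"y{t}", lowBound=0, upBound=1, cat="Binary") for t in range(T)]
s = [pulp.LpVariable(f"s{t}", lowBound=0) for t in range(T)]  # stock

prob = pulp.LpProblem("lot_sizing", pulp.LpMinimize)
prob += pulp.lpSum(setup_cost * y[t] + hold_cost * s[t] for t in range(T))
for t in range(T):
    prev = s[t - 1] if t else 0
    prob += prev + x[t] - demand[t] == s[t]   # inventory balance
    prob += x[t] <= cap * y[t]                # setup forcing + capacity

window = 2
for start in range(0, T, window):
    # Relax all setup variables beyond the current window...
    for t in range(start + window, T):
        y[t].cat = "Continuous"
    # ...keep the window binary, solve, then fix the window's decisions.
    for t in range(start, min(start + window, T)):
        y[t].cat = "Binary"
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for t in range(start, min(start + window, T)):
        v = round(y[t].value())
        y[t].lowBound = y[t].upBound = v

print("relax-and-fix cost:", pulp.value(prob.objective))
print("setups:", [round(v.value()) for v in y])
```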
Abstract:
Considering a series representation of a coherent system using a shift transform of the component lifetimes T_i at their critical levels Y_i, we study two problems. First, under such a shift transform, we analyse the preservation properties of the non-parametric distribution classes; second, we study the association-preserving property of the component lifetimes under such transformations.
Abstract:
In this work we analyse stochastic processes with polynomial (also called hyperbolic) decay of the autocorrelation function. Our study focuses on the class of ARFIMA processes and on processes obtained from iterations of the Manneville-Pomeau map. The main goals are to compare several estimation methods for the fractional parameter of the ARFIMA process, in both the stationary and non-stationary settings, and to obtain similar results for the parameter of the Manneville-Pomeau process. Among the various estimation methods for the parameters of these two processes, we highlight the one based on wavelet theory, as it showed the best performance.
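As a sketch of the wavelet-based approach that performed best (our illustration, not the thesis code), the example below simulates an ARFIMA(0, d, 0) series via a truncated MA(∞) expansion and regresses the log2 variance of the detail coefficients on the dyadic scale, in the Abry-Veitch style, where the slope is approximately 2d; the sample size, d, and the choice of wavelet are illustrative.

```python
# Wavelet (log-scale regression) estimate of the fractional parameter d.
import numpy as np
import pywt

rng = np.random.default_rng(42)
n, d, trunc = 2**14, 0.3, 2000

# MA(infinity) weights of (1 - B)^(-d): psi_0 = 1, psi_k = psi_{k-1} (k-1+d)/k.
psi = np.ones(trunc)
for k in range(1, trunc):
    psi[k] = psi[k - 1] * (k - 1 + d) / k
eps = rng.standard_normal(n + trunc)
x = np.convolve(eps, psi, mode="full")[trunc:trunc + n]

# Log2 variance of the detail coefficients at each dyadic scale j.
details = pywt.wavedec(x, "db4", level=8)[1:]   # coarsest to finest details
levels = np.arange(len(details), 0, -1)         # scale index j = 8 ... 1
logvar = np.array([np.log2(np.mean(c**2)) for c in details])

slope, _ = np.polyfit(levels, logvar, 1)        # slope ~ 2d
print(f"true d = {d}, wavelet estimate = {slope / 2:.3f}")
```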