50 results for Markov Clustering, GPU Computing, PPI Networks, CUDA, ELLPACK-R Sparse Format, Parallel Computing
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It assumes that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].
Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at lower computational complexity, algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than the volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented towards real-time target detection from uncrewed air vehicles using hyperspectral data [46].
In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
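The iterative projection step described above lends itself to a compact sketch. The following Python/NumPy fragment is only an illustration of the idea; the data matrix R (bands × pixels), the number of endmembers p, and the random initial direction are assumptions, not the authors' reference implementation:

```python
import numpy as np

def extract_endmembers(R, p, seed=None):
    """Sketch of VCA-style endmember extraction by iterative orthogonal projection.

    R : (bands, pixels) matrix of (possibly projected) spectral vectors.
    p : number of endmembers to extract.
    Returns the indices of the selected pixels and their signatures.
    """
    rng = np.random.default_rng(seed)
    bands, _ = R.shape
    E = np.zeros((bands, p))       # endmember signatures found so far
    indices = []

    for i in range(p):
        # Draw a direction and make it orthogonal to the subspace spanned
        # by the endmembers already determined.
        w = rng.standard_normal(bands)
        if i > 0:
            Q, _ = np.linalg.qr(E[:, :i])
            w = w - Q @ (Q.T @ w)
        w /= np.linalg.norm(w)

        # The extreme of the projection of the data onto w gives the new endmember.
        proj = w @ R
        idx = int(np.argmax(np.abs(proj)))
        E[:, i] = R[:, idx]
        indices.append(idx)

    return indices, E
```

Each iteration selects the pixel whose projection onto the current orthogonal direction is extreme and takes its spectrum as a new endmember signature, mirroring the pure-pixel assumption shared with PPI and N-FINDR.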
Abstract:
Integrated manufacturing constitutes a complex system made of heterogeneous information and control subsystems. Those subsystems are not designed for cooperation. Typically, each subsystem automates specific processes and establishes a closed application domain, so it is very difficult to integrate it with other subsystems in order to respond to the required process dynamics. Furthermore, to cope with ever-growing market competition and demands, manufacturing/enterprise systems need to increase their responsiveness based on up-to-date knowledge and in-time data gathered from the diverse information and control systems. These trends have created new challenges for the manufacturing sector, and even bigger challenges for collaborative manufacturing. The growing complexity of information and communication technologies, when coping with innovative business services based on collaborative contributions from multiple stakeholders, requires novel and multidisciplinary approaches. Service orientation is a strategic approach to deal with such complexity and with the variety of stakeholders' information systems. Services, or more precisely the autonomous computational agents implementing the services, provide an architectural pattern able to cope with the needs of integrated and distributed collaborative solutions. This paper proposes a service-oriented framework aiming to support a virtual organizations breeding environment, which is the basis for establishing short- or long-term goal-oriented virtual organizations. The notion of integrated business services, where customers receive value developed through the contributions of a network of companies, is a key element.
Abstract:
Although the computational power of mobile devices has been increasing, it is still not enough for some classes of applications. At present, these applications offload the computational burden to servers located on the Internet. This model assumes always-on Internet connectivity and implies non-negligible latency. The thesis addresses the challenges and contributions involved in applying the concept of a mobile collaborative computing environment to wireless networks. The goal is to define a reference architecture for high-performance mobile applications. Current work is focused on efficient data dissemination in a highly transient environment, suitable for many mobile applications, and also on the reputation and incentive system available in this mobile collaborative computing environment. For this we are improving our already published reputation/incentive algorithm with knowledge of the usage patterns of the eduroam wireless network in the Lisbon area.
Abstract:
The complexity associated with the fast growth of B2B and the lack of a (complete) suite of open standards make it difficult to maintain the underlying collaborative processes. In line with this challenge, this paper aims to contribute to an open architecture for a logistics and transport process management system. A model of an open integrated system is being defined, covering the open computational responsibilities of the embedded (on-board) systems, together with a reference implementation (prototype) of a host system to validate the proposed open interfaces. The embedded subsystem can natively be prepared to cooperate with other on-board units and with IT systems in an infrastructure commonly referred to as a central information system or back office. For interaction with a central system, the proposal is to adopt an open framework for cooperation in which the embedded unit, or a unit placed elsewhere (on land or at sea), interacts in response to a set of implemented capabilities.
Abstract:
The need for computational power keeps growing in the various areas of human activity, both in industry and in academic environments. Grid Computing allows dispersed computational resources to be connected so that they can be used more effectively, providing users with simplified access to the computational power of multiple systems. The first Grid Computing projects involved connecting parallel machines or high-performance, high-cost clusters available only at a few institutions. In contrast with the high cost of supercomputers, personal computers and the Internet have evolved significantly in recent years. Using computers dispersed over a WAN can therefore provide a very interesting environment for high-performance processing. Grid systems offer the possibility of using a set of personal computers to deliver computation that exploits resources which would otherwise remain unused. This work consists of a study of Grid Computing at the level of concept and architecture, together with an analysis of its current state. As a complement, a component was developed that makes the development of Grid Services more effective than the service-support model currently in use. This component is made available as a plug-in for the Eclipse IDE platform.
Abstract:
Transport routes are indispensable for the economic and social development of a nation. In a globalised world, where everything must reach its destination in the shortest possible time, transport routes play a vital role. It is therefore essential to build and maintain an efficient transport network. Although it is not the most efficient mode, road transport is often the most economical and allows door-to-door delivery, being in many cases the only possible means of transport. For these reasons, road transport has a significant share of the transport market, whether for passengers or goods, making it extremely important in a country's transport network. European countries have invested heavily in creating extensive road networks covering almost their entire territory. We are now reaching the point where the main concern of road administrations is no longer the construction of new roads, but rather the need for maintenance and conservation of the existing ones. Road pavements, like all other constructions, require maintenance in order to guarantee good service levels with quality, comfort, and safety. Given the costs inherent to pavement maintenance operations, these must be planned rigorously and on the basis of well-defined scientific criteria. The aim is to avoid unnecessary interventions, but also to prevent damage from becoming irreparable and economically harmful, with repercussions on user safety. To estimate the remaining service life of a pavement, it is essential to first carry out its structural characterisation. This requires knowing the type of pavement structure, namely the thickness and elastic modulus of its constituent layers. The use of non-destructive testing methods is increasingly recognised as an effective way of obtaining information about the structural behaviour of pavements. Several types of equipment exist for these tests. However, two of them, the Falling Weight Deflectometer and Ground Penetrating Radar, have proven particularly effective for assessing the bearing capacity of a pavement, and both were used in this study. For pavement load testing, the Falling Weight Deflectometer has been used successfully to measure the surface deflections of a pavement at predetermined points when subjected to a standardised load that simulates the effect of a passing truck wheel. Complementarily, to obtain continuous information about the pavement structure, Ground Penetrating Radar makes it possible to determine the number of layers and their thicknesses using electromagnetic waves. When used together with rotary core drilling and test pits at selected locations, these data allow a more precise characterisation of the structural condition of a pavement and the establishment of response models, in the case of existing pavements. On the other hand, processing the data obtained during the in situ tests proves to be a time-consuming and complex task. Currently, using the pavement layer thicknesses, the elastic moduli of the layers are calculated through back-analysis of the deflection basin measured in the load tests.
This method is iterative: an experienced engineer tests several different pavement structures until a structure is found whose response is as close as possible to that obtained during the in situ tests. The task is highly dependent on the engineer's experience, since the pavement structures to be tested result mostly from his or her reasoning. Another drawback of this method is that it yields multiple solutions, given that different structures can produce identical response models. The accepted solution is often the one judged most likely, again relying on the engineer's reasoning and experience. A solution to the problem of the huge amount of data to be processed and of the multiple possible solutions may be the use of Artificial Neural Networks (ANNs) to assist in this task. Neural networks are virtual computational elements whose operation is inspired by the way biological nervous systems, such as the brain, process information. These elements are composed of a series of layers, which in turn are composed of neurons. As information is transmitted between neurons, it is modified by the application of a coefficient called a "weight". Neural networks have a very useful ability: they can map a function without knowing its mathematical formula. This ability is exploited in several scientific fields, such as pattern recognition, classification, or data compression. To make use of this characteristic, the network must first be properly "trained", a process carried out by presenting two data sets: the input values and the desired output values. Through a cyclical process of propagating information through the connections between neurons, the networks gradually adjust themselves, producing better results. Although several types of network exist, the ones that appear most suitable for this task are back-propagation networks. These have an important characteristic, namely so-called "supervised training". Because of this training method, the networks operate within the range of variation of the data supplied for training and, consequently, the computed results also fall within the same range, preventing mathematically valid but practically impossible solutions. To make this task even simpler, a computer program, NNPav, was developed using ANNs as an integral part of its calculation process. The goal is to make the back-analysis process fully automatic and to prevent errors induced by the user's lack of experience. To further extend the program's functionality, a calculation process was implemented that estimates the bearing capacity and remaining service life of the pavement using two failure criteria. These criteria are normally used in pavement design to prevent fatigue cracking and permanent deformation. In this way, the program allows the remaining service life of a pavement to be estimated efficiently, directly from the deflections and layer thicknesses measured in the in situ tests.
All steps of the structural characterisation of the pavement are carried out by NNPav, whether using neural networks or mathematical calculation procedures, including the correction of the elastic modulus of the bituminous-mixture layer to the design temperature and taking into account traffic characteristics and growth rates. The tests performed on the neural networks showed that satisfactory results were achieved. The error levels obtained with neural networks are similar to those obtained with linear-elastic layered models, except for the service-life calculation based on one of the criteria, where the errors were higher. Nevertheless, this process is considerably faster and allows the data to be processed by less experienced staff. At the same time, it was ensured that the results files expose all the data computed by the program at the various processing stages, so that they can be examined in detail. The ability to estimate the bearing capacity and remaining service life of a pavement, provided by the developed program, also represents an important tool. Essentially, NNPav allows a complete structural analysis of a pavement, estimating its service life based on the field tests performed with the Falling Weight Deflectometer and Ground Penetrating Radar, in a single step. In addition, a module for the design of new pavements was developed and implemented in NNPav. Given a set of candidate pavement structures, this module estimates the bearing capacity and service life of each one, allowing a large number of pavement structures to be analysed and the results to be compared easily in the exported file. Although the results obtained in this work are quite satisfactory, future developments in the application of neural networks to pavement evaluation are even more promising. Since this work was limited to the time frame inherent to an academic project, the possibility of further improving the response of the ANNs remains open. Despite the various tests performed on the networks in order to obtain the architectures yielding the best results, the possible architectures are virtually unlimited and this may be an area for further study. The functionalities implemented in the program were those feasible within the time frame mentioned, but many functionalities could still be added or expanded, increasing the program's capability and productivity. Since this is a tool that can be applied at the road-network management level, it would be necessary to study and develop similar networks in order to evaluate other types of pavement structures. As a final conclusion, despite the various aspects that can and should be improved, the program developed proved to be a very useful and efficient tool for the structural evaluation of pavements based on non-destructive testing methods.
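To make the supervised-training idea concrete, the sketch below trains a small back-propagation network to map a deflection basin plus layer thicknesses onto layer moduli. It is purely illustrative: the synthetic data, the network size, and the use of scikit-learn's MLPRegressor are assumptions, not part of NNPav.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training set (placeholders for FWD/GPR-derived data):
# inputs  = 7 deflections (mm) + 3 layer thicknesses (m)
# outputs = 3 layer moduli (MPa), generated here by arbitrary surrogate formulas
X = rng.uniform(low=[0.1] * 7 + [0.05] * 3, high=[1.0] * 7 + [0.60] * 3, size=(2000, 10))
y = np.column_stack([
    5000 / X[:, 0] * X[:, 7],   # "asphalt" modulus surrogate
    800 / X[:, 3] * X[:, 8],    # "base" modulus surrogate
    150 / X[:, 6],              # "subgrade" modulus surrogate
])

# Back-propagation network trained on (input, target) pairs, i.e. supervised training.
net = MLPRegressor(hidden_layer_sizes=(20, 20), activation="tanh",
                   solver="adam", max_iter=2000, random_state=0)
net.fit(X, y)

# Back-analysis of a new deflection basin measured in situ (stand-in value here).
basin = X[:1]
print(net.predict(basin))  # estimated layer moduli (MPa)
```

Because the network is trained on measured (or simulated) basins paired with known moduli, it only returns answers inside the range of its training data, which is exactly the property exploited above to avoid physically implausible back-analysis solutions.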
Abstract:
Collaborative networks are typically formed by heterogeneous and autonomous entities, and thus it is natural that each member has its own set of core-values. Since these values somehow drive the behaviour of the involved entities, the ability to quickly identify partners with compatible or common core-values represents an important element for the success of collaborative networks. However, tools to assess or measure the level of alignment of core-values are lacking. Since the concept of 'alignment' in this context is still ill-defined and multifaceted, three perspectives are discussed. The first one uses a causal maps approach in order to capture, structure, and represent the influence relationships among core-values. This representation provides the basis to measure alignment in terms of the structural similarity and influence among value systems. The second perspective considers the compatibility and incompatibility among core-values in order to define the alignment level. Under this perspective we propose a fuzzy inference system to estimate the alignment level, since this approach allows dealing with variables that are vaguely defined and whose inter-relationships are difficult to specify; a further advantage of this method is the possibility of incorporating expert human judgement in the definition of the alignment level. The last perspective uses a Bayesian belief network method and was selected in order to assess the alignment level based on members' past behaviour. An application example is presented in which the details of each method are discussed.
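As a toy illustration of the fuzzy-inference perspective, the snippet below estimates an alignment level from expert-judged compatibility and incompatibility degrees. The membership functions and the two rules are invented for illustration and are not the paper's actual inference system:

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def alignment_level(compatibility, incompatibility):
    """Tiny Mamdani-style inference with two illustrative rules.

    compatibility, incompatibility: expert-judged degrees in [0, 1].
    Returns a crisp alignment level in [0, 1] via centroid defuzzification.
    """
    # Rule 1: IF compatibility is high AND incompatibility is low THEN alignment is high.
    r1 = min(tri(compatibility, 0.5, 1.0, 1.5), tri(incompatibility, -0.5, 0.0, 0.5))
    # Rule 2: IF incompatibility is high THEN alignment is low.
    r2 = tri(incompatibility, 0.5, 1.0, 1.5)

    # Aggregate the clipped output sets "high" and "low" sampled on [0, 1].
    xs = [i / 100 for i in range(101)]
    agg = [max(min(r1, tri(x, 0.5, 1.0, 1.5)), min(r2, tri(x, -0.5, 0.0, 0.5))) for x in xs]
    num = sum(x * m for x, m in zip(xs, agg))
    den = sum(agg)
    return num / den if den else 0.5

print(alignment_level(0.8, 0.1))  # expected: a relatively high alignment level
```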
Abstract:
This paper proposes artificial neural networks in combination with wavelet transform for short-term wind power forecasting in Portugal. The increased integration of wind power into the electric grid, as nowadays occurs in Portugal, poses new challenges due to its intermittency and volatility. Hence, good forecasting tools play a key role in tackling these challenges. Results from a real-world case study are presented. A comparison is carried out, taking into account the results obtained with other approaches. Finally, conclusions are duly drawn. (C) 2010 Elsevier Ltd. All rights reserved.
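A minimal sketch of the general wavelet-plus-ANN scheme is given below. The wavelet family, decomposition level, lag length, and synthetic wind series are all assumptions made for illustration; this is not the forecasting model evaluated in the paper:

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
wind = np.abs(np.sin(np.linspace(0, 40, 1024)) + 0.3 * rng.standard_normal(1024))  # toy series

# 1) Wavelet decomposition, then reconstruct each sub-series at full length.
coeffs = pywt.wavedec(wind, "db4", level=3)
subseries = [
    pywt.waverec([c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)], "db4")
    for i in range(len(coeffs))
]

# 2) One ANN per sub-series: predict the next value from the previous `lags` values.
lags, next_step = 24, 0.0
for s in subseries:
    X = np.array([s[i:i + lags] for i in range(len(s) - lags)])
    y = s[lags:]
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)
    next_step += net.predict(s[-lags:].reshape(1, -1))[0]

# 3) The short-term forecast is the sum of the component forecasts.
print(next_step)
```

The intermittency of wind power is what motivates the decomposition step: each smoother sub-series is easier for a small network to learn than the raw, volatile signal.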
Abstract:
This work describes a methodology to extract symbolic rules from trained neural networks. In our approach, patterns in the network are codified using formulas of Łukasiewicz logic. For this we take advantage of the fact that every connective in this multi-valued logic can be evaluated by a neuron in an artificial network having as activation function the identity truncated to zero and one. This fact simplifies symbolic rule extraction and allows the easy injection of formulas into a network architecture. We trained this type of neural network using a back-propagation algorithm based on the Levenberg–Marquardt algorithm, where in each learning iteration we restricted the knowledge dissemination in the network structure. This makes the descriptive power of the produced neural networks similar to the descriptive power of the Łukasiewicz logic language, minimizing the information loss in the translation between connectionist and symbolic structures. To avoid redundancy in the generated networks, the method simplifies them in a pruning phase using the "Optimal Brain Surgeon" algorithm. We tested this method on the task of finding the formula used to generate a given truth table. For tests with real data, we selected the Mushroom data set, available from the UCI Machine Learning Repository.
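The central fact used above — that a neuron whose activation is the identity truncated to zero and one can evaluate every Łukasiewicz connective — can be checked with a few lines of Python (an illustrative sketch, not the rule-extraction code itself):

```python
def neuron(weights, bias, inputs):
    """Neuron with the identity activation truncated to [0, 1]."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return min(1.0, max(0.0, s))

# Łukasiewicz strong conjunction: x ⊗ y = max(0, x + y - 1)
def luk_and(x, y):
    return neuron([1.0, 1.0], -1.0, [x, y])

# Łukasiewicz implication: x → y = min(1, 1 - x + y)
def luk_implies(x, y):
    return neuron([-1.0, 1.0], 1.0, [x, y])

# Łukasiewicz negation: ¬x = 1 - x
def luk_not(x):
    return neuron([-1.0], 1.0, [x])

# Spot-check on a few truth values.
for x, y in [(0.0, 0.0), (1.0, 0.5), (0.7, 0.8)]:
    assert abs(luk_and(x, y) - max(0.0, x + y - 1.0)) < 1e-12
    assert abs(luk_implies(x, y) - min(1.0, 1.0 - x + y)) < 1e-12
print("clipped-identity neurons reproduce Łukasiewicz connectives")
```

Because each connective corresponds exactly to one such neuron, formulas can be injected into a network by wiring these neurons together, and, conversely, a trained network of this type can be read back as a formula.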
Abstract:
We investigate the phase behaviour of 2D mixtures of bi-functional and tri-functional patchy particles and 3D mixtures of bi-functional and tetra-functional patchy particles by means of Monte Carlo simulations and Wertheim theory. We start by computing the critical points of the pure systems and then investigate how the critical parameters change upon lowering the temperature. We extend the successive umbrella sampling method to mixtures to make it possible to extract information about the phase behaviour of the system at a fixed temperature for the whole range of densities and compositions of interest. (C) 2013 AIP Publishing LLC.
Abstract:
In general, modern networks are analysed by taking several Key Performance Indicators (KPIs) into account, their proper balance being required in order to guarantee a desired Quality of Service (QoS), particularly in cellular wireless heterogeneous networks. A model to integrate a set of KPIs into a single one is presented, using a Cost Function that includes these KPIs and provides, for each network node, a single evaluation parameter as output, reflecting both network conditions and the performance of common radio resource management strategies. The proposed model enables the implementation of different network management policies, by manipulating KPIs according to users' or operators' perspectives, allowing for better QoS. Results show that different policies can in fact be established, with a different impact on the network, e.g., with median values differing by a factor of more than two.
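The kind of KPI aggregation described above can be sketched as a weighted cost function. The KPI names, normalisation ranges, and policy weights below are assumptions chosen for illustration, not the paper's actual model:

```python
from typing import Dict

# Illustrative normalisation ranges: (worst, best) value for each KPI.
KPI_RANGES: Dict[str, tuple] = {
    "blocking_prob": (0.10, 0.00),   # lower is better
    "handover_fail": (0.05, 0.00),   # lower is better
    "throughput_mbps": (0.0, 50.0),  # higher is better
    "load": (1.0, 0.3),              # lower is better
}

def normalise(kpi: str, value: float) -> float:
    """Map a raw KPI value to [0, 1], where 1 is best."""
    worst, best = KPI_RANGES[kpi]
    score = (value - worst) / (best - worst)
    return min(1.0, max(0.0, score))

def node_cost(kpis: Dict[str, float], weights: Dict[str, float]) -> float:
    """Single evaluation parameter per node: weighted cost (0 = ideal, 1 = worst).

    Different management policies are obtained by changing the weights,
    e.g. a user-centric policy emphasising throughput versus an
    operator-centric policy emphasising load and blocking.
    """
    total_w = sum(weights.values())
    return sum(w * (1.0 - normalise(k, kpis[k])) for k, w in weights.items()) / total_w

node = {"blocking_prob": 0.02, "handover_fail": 0.01, "throughput_mbps": 30.0, "load": 0.7}
user_policy = {"blocking_prob": 1, "handover_fail": 1, "throughput_mbps": 3, "load": 1}
operator_policy = {"blocking_prob": 3, "handover_fail": 1, "throughput_mbps": 1, "load": 3}
print(node_cost(node, user_policy), node_cost(node, operator_policy))
```

The two weight sets show how the same node can be scored differently under user- or operator-oriented policies, which is the mechanism for policy manipulation described above.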
Abstract:
Processes are a central entity in enterprise collaboration. Collaborative processes need to be executed and coordinated in a distributed computational platform where computers are connected through heterogeneous networks and systems. Life-cycle management of such collaborative processes requires a framework able to handle their diversity, based on different computational and communication requirements. This paper proposes a rationale for such a framework, points out key requirements, and proposes a strategy for a supporting technological infrastructure. Beyond the portability of collaborative process definitions among different technological bindings, a framework to handle the different life-cycle phases of those definitions is presented and discussed. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
In this work, we present a neural network (NN) based method designed for 3D rigid-body registration of FMRI time series, which relies on a limited number of Fourier coefficients of the images to be aligned. These coefficients, contained in a small cubic neighborhood located in the first octant of the 3D Fourier space (including the DC component), are fed into six NNs during the learning stage. Each NN yields the estimate of one registration parameter. The proposed method was assessed for 3D rigid-body transformations, using DC neighborhoods of different sizes. The mean absolute registration errors are approximately 0.030 mm in translations and 0.030 deg in rotations, for the typical motion amplitudes encountered in FMRI studies. The construction of the training set and the learning stage are fast, requiring, respectively, 90 s and 1 to 12 s, depending on the number of input and hidden units of the NN. We believe that NN-based approaches to the problem of FMRI registration can be of great interest in the future. For instance, NNs relying on limited k-space data (possibly from navigator echoes) can be a valid solution to the problem of prospective (in-frame) FMRI registration.
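The feature-extraction idea — feeding a small cube of low-frequency Fourier coefficients to a regression network — can be sketched as follows. The neighborhood size, the toy volumes, and the use of scikit-learn are assumptions for illustration, not the authors' implementation:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fourier_features(volume: np.ndarray, k: int = 3) -> np.ndarray:
    """Features for one volume: a k×k×k cube of low-frequency Fourier
    coefficients taken from the first octant (including the DC component)."""
    F = np.fft.fftn(volume)
    cube = F[:k, :k, :k]
    # Split complex coefficients into real and imaginary parts.
    return np.concatenate([cube.real.ravel(), cube.imag.ravel()])

# Toy training data: volumes paired with a known translation along one axis
# (one of six rigid-body parameters; in the method above, each parameter has its own NN).
rng = np.random.default_rng(0)
base = rng.standard_normal((16, 16, 16))
shifts = rng.integers(-2, 3, size=200)
X = np.array([fourier_features(np.roll(base, int(s), axis=0)) for s in shifts])
y = shifts.astype(float)

net = MLPRegressor(hidden_layer_sizes=(30,), max_iter=3000, random_state=0).fit(X, y)
print(net.predict(X[:3]), y[:3])  # estimated vs. true shift (voxels)
```

Only a few dozen coefficients per volume are used as input, which is what keeps both training-set construction and learning fast in the approach described above.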
Abstract:
Thesis submitted in fulfilment of the requirements for the degree of Master in Electronic and Telecommunications Engineering
Abstract:
Micro- and nano-patterned materials are of great importance for the design of new nanoscale electronic, optical and mechanical devices, ranging from sensors to displays. A promising system that can support such designed functionality is elastomeric polyurethane thin films with nano- or micro-modulated surface structures ("wrinkles"). These wrinkles can be induced on different length scales by mechanically stretching the films, without the need for any sophisticated lithographic techniques. In the present article we focus on the experimental control of the wrinkling process. A simple model for wrinkle formation is also discussed, and some preliminary results are reported. Hierarchical assembly of these tunable structures paves the way for the development of a new class of materials with a wide range of applications, from electronics to biomedicine.