989 results for Multi-layers
Abstract:
Object recognition requires that templates with canonical views are stored in memory. Such templates must somehow be normalised. In this paper we present a novel method for obtaining 2D translation, rotation and size invariance. Cortical simple, complex and end-stopped cells provide multi-scale maps of lines, edges and keypoints. These maps are combined such that objects are characterised. Dynamic routing in neighbouring neural layers allows feature maps of input objects and stored templates to converge. We illustrate the construction of group templates and the invariance method for object categorisation and recognition in the context of a cortical architecture, which can be applied in computer vision.
Abstract:
Empirical studies concerning face recognition suggest that faces may be stored in memory by a few canonical representations. Models of visual perception are based on image representations in cortical area V1 and beyond, which contain many cell layers for feature extraction. Simple, complex and end-stopped cells provide input for line, edge and keypoint detection. Detected events provide a rich, multi-scale object representation, and this representation can be stored in memory in order to identify objects. In this paper, the above context is applied to face recognition. The multi-scale line/edge representation is explored in conjunction with keypoint-based saliency maps for Focus-of-Attention. Recognition rates of up to 96% were achieved by combining frontal and 3/4 views, and recognition was quite robust against partial occlusions.
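As an illustration of how keypoint maps could drive Focus-of-Attention, the following minimal sketch (not the paper's implementation; the function names, the Gaussian pooling of keypoint maps and the inhibition-of-return step are assumptions) combines multi-scale keypoint maps into a saliency map and selects a few fixation points:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_from_keypoints(keypoint_maps, scales):
    """Combine binary keypoint maps detected at several scales into one
    saliency map: blur each map with a kernel proportional to its scale
    and sum the results."""
    h, w = keypoint_maps[0].shape
    saliency = np.zeros((h, w))
    for kp_map, sigma in zip(keypoint_maps, scales):
        saliency += gaussian_filter(kp_map.astype(float), sigma=sigma)
    return saliency / (saliency.max() + 1e-12)

def focus_of_attention(saliency, n_fixations=5, inhibition_radius=15):
    """Pick fixation points by repeatedly taking the saliency maximum and
    suppressing its neighbourhood (inhibition of return)."""
    s = saliency.copy()
    fixations = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(s), s.shape)
        fixations.append((y, x))
        yy, xx = np.ogrid[:s.shape[0], :s.shape[1]]
        s[(yy - y) ** 2 + (xx - x) ** 2 <= inhibition_radius ** 2] = 0
    return fixations
```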
Abstract:
This paper describes the development and implementation of a multi-agent system for the integrated diagnosis of power transformers. The system is divided into layers, each containing a number of agents performing different functions. The agents' social ability and cooperation lead to the final diagnosis and to other relevant conclusions by integrating various monitoring technologies, diagnostic methods and data sources, such as dissolved gas analysis.
Abstract:
In this thesis we review and analyse networks formed by nodes with several attributes. We assume that different layers of communities are embedded in such networks, and that each layer is associated with one of the nodes' attributes. For example, consider an online social network: a user participates in many different groups/communities – schoolmates, colleagues, clients, etc. We introduce a detection algorithm for such communities. Typically, detection returns only the community defined by the most dominant attribute, disregarding the others. We propose an algorithm that bypasses dominant communities and detects communities formed by the nodes' other attributes. We also review formation models for attributed networks and present a Human Communication Network (HCN) model. We introduce a High School Texting Network (HSTN) and evaluate our methods on that network.
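A minimal sketch of the general idea, not the thesis's algorithm: detect communities, estimate which attribute they are most aligned with, remove the edges that attribute explains, and re-detect. The function names and the use of networkx's modularity communities are assumptions for illustration; node attributes are assumed to be stored on the graph.

```python
from collections import Counter
from networkx.algorithms.community import greedy_modularity_communities

def best_aligned_attribute(G, communities, attributes):
    """Score each attribute by how homogeneous its values are inside the
    detected communities; the highest-scoring one is treated as dominant."""
    scores = {}
    for attr in attributes:
        purity = 0.0
        for com in communities:
            values = [G.nodes[n][attr] for n in com]
            purity += Counter(values).most_common(1)[0][1] / len(values)
        scores[attr] = purity / len(communities)
    return max(scores, key=scores.get)

def detect_secondary_communities(G, attributes):
    """Detect communities, find the dominant attribute, drop the edges it
    explains, and re-detect to expose communities formed by the remaining
    attributes."""
    primary = list(greedy_modularity_communities(G))
    dominant = best_aligned_attribute(G, primary, attributes)
    H = G.copy()
    H.remove_edges_from(
        (u, v) for u, v in G.edges()
        if G.nodes[u][dominant] == G.nodes[v][dominant]
    )
    return dominant, list(greedy_modularity_communities(H))
```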
Abstract:
In this paper, different recovery methods applied at different network layers and time scales are used to enhance network reliability. Each layer deploys its own fault management methods; however, current recovery methods are applied to only a specific layer. New protection schemes, based on the proposed partial disjoint path algorithm, are defined to avoid protection duplication in a multi-layer scenario. The new protection schemes also encompass shared segment backup computation and shared risk link group identification. A complete set of experiments demonstrates the efficiency of the proposed methods relative to previous ones in terms of the resources used to protect the network, the failure recovery time and the request rejection ratio.
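To make the notion of a partial disjoint backup path concrete, here is a minimal sketch, not the algorithm proposed in the paper: the backup path penalises unprotected links of the primary path while allowing reuse of links already protected at the lower layer. The graph attributes ('weight', 'wdm_protected') and the networkx-based formulation are assumptions.

```python
import networkx as nx

def partial_disjoint_backup(G, primary_path, src, dst, penalty=1000):
    """Compute a backup path that avoids the primary path's links unless
    they are flagged as already protected at the optical layer, by
    penalising rather than removing unprotected primary links."""
    primary_edges = set(zip(primary_path, primary_path[1:]))
    H = G.copy()
    for u, v, data in H.edges(data=True):
        on_primary = (u, v) in primary_edges or (v, u) in primary_edges
        if on_primary and not data.get("wdm_protected", False):
            data["weight"] = data.get("weight", 1) + penalty
    return nx.shortest_path(H, src, dst, weight="weight")

# Usage on a hypothetical topology G:
# primary = nx.shortest_path(G, "A", "Z", weight="weight")
# backup = partial_disjoint_backup(G, primary, "A", "Z")
```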
Abstract:
In IP/MPLS-over-WDM networks, which transport large amounts of information, the ability to guarantee that traffic reaches its destination node has become an important problem, since the failure of a single network element can result in a large amount of lost information. To guarantee that traffic affected by a failure reaches its destination node, new routing algorithms have been defined that incorporate knowledge of the protection available at both layers: the optical layer (WDM) and the packet layer (IP/MPLS). In this way, reserving resources to protect the traffic at both layers is avoided. The new algorithms result in better use of network resources, offer fast recovery times, avoid resource duplication and reduce the number of optical-to-electrical signal conversions.
Abstract:
Building refurbishment is key to reducing the carbon footprint and improving comfort in the built environment. However, quantifying the real benefit of a facade change, which can bring advantages to owners (value), occupants (comfort) and society (sustainability), is not a simple task. At a building physics level, the changes in kWh per m2 of heating/cooling load can be readily quantified. However, there are many subtle layers of operation and maintenance below these headline figures which determine how sustainable a building is in reality, such as quality-of-life factors. This paper considers the range of approaches taken by a facade refurbishment consortium to assess refurbishment solutions for multi-storey, multi-occupancy buildings and how to critically evaluate them. Each of the applied tools spans one or more of the three building parameters of people, product and process. 'Decision making' analytical network process and parametric building analysis tools are described and their potential impact on the building refurbishment process evaluated.
Abstract:
The polymeric precursor method was used to prepare multi-layered LiNbO3 films. The overall process consists of preparing a coating solution by the Pechini process; the deposited film is subsequently heat-treated. Two-layered films were prepared by this process on (0001) sapphire substrates. Two different heat-treatment routes were investigated. The amorphous route consisted of performing, after each deposition, a pre-treatment at low temperature to eliminate the organic material; in this case, the crystallization heat-treatment was performed only after the two layers had been deposited. In the other route, a layer-after-layer crystallization process was used. Both routes led to (0001)-oriented LiNbO3 films. However, only the film prepared by layer-after-layer crystallization presented epitaxial growth and a crack-free morphology. Moreover, the layer-after-layer crystallization process led to the film exhibiting the best optical properties.
Abstract:
In this work, fresh cables were laboratory-aged under multi-stress conditions at room temperature. Foils approximately 150 μm thick were peeled from the outer, middle and inner positions of the XLPE cable insulating layer. For samples obtained from the outer cable layer position, an increasing, near-permanent electrical conduction process was observed with ageing time. At the middle and inner cable layer positions, a flat-loss relaxation process was observed, which became the dominant process during ageing. In addition, PEA results confirmed that degradation in the outer region of the XLPE cables arises from the simultaneous presence of dipoles and injected space charge, which distorts the internal electric field during ageing.
Abstract:
Twenty years after the discovery of the first planets outside our solar system, the current exoplanetary population includes more than 700 confirmed planets around main-sequence stars. Approximately 50% belong to multiple-planet systems in very diverse dynamical configurations, from two-planet hierarchical systems to multiple resonances that could only have been attained as the consequence of a smooth large-scale orbital migration. The first part of this paper reviews the main techniques employed for the detection and orbital characterization of multiple-planet systems, from the (now) classical radial velocity (RV) method to the use of transit timing variations (TTV) for the identification of additional planetary bodies orbiting the same star. In the second part we discuss the dynamical evolution of multi-planet systems due to their mutual gravitational interactions. We analyze possible modes of motion for hierarchical, secular or resonant configurations, and the stability criteria that can be defined in each case. In some cases the dynamics can be well approximated by simple analytical expressions for the Hamiltonian function, while other configurations can only be studied with semi-analytical or numerical tools. In particular, we show how mean-motion resonances can generate complex structures in phase space, where different libration islands and circulation domains are separated by chaotic layers. In all cases we use real exoplanetary systems as working examples.
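For reference, the standard textbook form of a mean-motion resonance condition and its critical angle (a generic form, not specific to this paper's formulation) can be written as:

```latex
% (p+q):p mean-motion resonance between an inner planet 1 and outer planet 2
\frac{n_1}{n_2} \simeq \frac{p+q}{p}, \qquad
n_i = \sqrt{\frac{G\,(M_\star + m_i)}{a_i^{3}}}

% One associated resonant (critical) angle; its libration marks resonant motion
\theta = (p+q)\,\lambda_2 - p\,\lambda_1 - q\,\varpi_1
```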
Abstract:
Unlike traditional wireless networks, characterized by the presence of last-mile, static and reliable infrastructures, Mobile Ad Hoc Networks (MANETs) are dynamically formed by collections of mobile and static terminals that exchange data by enabling each other's communication. Supporting multi-hop communication in a MANET is a challenging research area because it requires cooperation between different protocol layers (MAC, routing, transport). In particular, MAC and routing protocols can be considered mutually cooperative protocol layers. When a route is established, the exposed- and hidden-terminal problems at the MAC layer may decrease the end-to-end performance in proportion to the length of each route. Conversely, contention at the MAC layer may cause a routing protocol to respond by initiating new route queries and routing table updates. Multi-hop communication may also benefit from the presence of pseudo-centralized virtual infrastructures obtained by grouping nodes into clusters. Clustering structures may facilitate the spatial reuse of resources, increasing the system capacity; at the same time, the clustering hierarchy may be used to coordinate transmission events inside the network and to support intra-cluster routing schemes. Again, MAC and clustering protocols can be considered mutually cooperative protocol layers: the clustering scheme can support MAC-layer coordination among nodes by shifting the distributed MAC paradigm towards a pseudo-centralized MAC paradigm. On the other hand, the system benefits of the clustering scheme can be emphasized by the pseudo-centralized MAC layer through support for differentiated access priorities and controlled contention. In this thesis, we propose cross-layer solutions involving the joint design of MAC, clustering and routing protocols in MANETs. As the main contribution, we study and analyze the integration of MAC and clustering schemes to support multi-hop communication in large-scale ad hoc networks. A novel clustering protocol, named Availability Clustering (AC), is defined under general node-heterogeneity assumptions in terms of connectivity, available energy and relative mobility. On this basis, we design and analyze a distributed and adaptive MAC protocol, named Differentiated Distributed Coordination Function (DDCF), whose focus is to implement adaptive access differentiation based on the node roles assigned by the upper-layer clustering scheme. We extensively simulate the proposed clustering scheme, showing its effectiveness in dominating the network dynamics under stressing mobility models and different mobility rates. Based on these results, we propose a possible application of the cross-layer MAC+clustering scheme to support the fast propagation of alert messages in a vehicular environment. At the same time, we investigate the integration of MAC and routing protocols in large-scale multi-hop ad hoc networks. A novel multipath routing scheme is proposed, extending the AOMDV protocol with a load-balancing approach that concurrently distributes traffic among the multiple paths. We also study the composition effect of an IEEE 802.11-based enhanced MAC forwarding mechanism called Fast Forward (FF), used to reduce the effects of self-contention among frames at the MAC layer. The protocol framework is modelled and extensively simulated for a large set of metrics and scenarios.
For both schemes, the simulation results reveal the benefits of the cross-layer MAC+routing and MAC+clustering approaches over single-layer solutions.
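As a purely illustrative sketch of weighted per-packet distribution over multiple discovered paths, and not the load-balancing scheme developed in the thesis (the example path set, the hop-count weighting and the random scheduler are assumptions):

```python
import random

def path_weights(paths):
    """Give each path a weight inversely proportional to its hop count,
    so shorter paths carry proportionally more traffic."""
    inv = [1.0 / (len(p) - 1) for p in paths]   # hops = len(path) - 1
    total = sum(inv)
    return [w / total for w in inv]

def schedule_packets(paths, n_packets, rng=random.Random(0)):
    """Weighted random assignment of packets to the available paths,
    a simple stand-in for per-packet load balancing."""
    weights = path_weights(paths)
    return [rng.choices(range(len(paths)), weights=weights)[0]
            for _ in range(n_packets)]

# Example with three node-disjoint paths discovered by a multipath protocol:
paths = [["S", "A", "D"], ["S", "B", "C", "D"], ["S", "E", "F", "G", "D"]]
print(schedule_packets(paths, 10))
```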
Abstract:
Traditional software engineering approaches and metaphors fall short when applied to areas of growing relevance such as electronic commerce, enterprise resource planning, and mobile computing: such areas, in fact, generally call for open architectures that may evolve dynamically over time so as to accommodate new components and meet new requirements. This is probably one of the main reasons why the agent metaphor and the agent-oriented paradigm are gaining momentum in these areas. This thesis deals with the engineering of complex software systems in terms of the agent paradigm. This paradigm is based on the notions of agent and of systems of interacting agents as fundamental abstractions for designing, developing and managing at runtime typically distributed software systems. However, today the engineer often works with technologies that do not support the abstractions used in the design of the systems. For this reason, research on methodologies has become a central point of scientific activity. Currently most agent-oriented methodologies are supported by small teams of academic researchers and, as a result, most of them are at an early stage and still represent mostly "academic" approaches to agent-oriented systems development. Moreover, such methodologies are not well documented and are very often defined and presented by focusing only on specific aspects of the methodology. The role played by meta-models becomes fundamental for comparing and evaluating the methodologies: in fact, a meta-model specifies the concepts, rules and relationships used to define methodologies. Although it is possible to describe a methodology without an explicit meta-model, formalising the underpinning ideas of the methodology in question is valuable when checking its consistency or planning extensions or modifications. A good meta-model must address all the different aspects of a methodology, i.e. the process to be followed, the work products to be generated and those responsible for making all this happen. In turn, specifying the work products that must be developed implies defining the basic modelling building blocks from which they are built. As a building block, the agent abstraction alone is not enough to fully model all the aspects related to multi-agent systems in a natural way. In particular, different perspectives exist on the role that the environment plays within agent systems; however, it is clear at least that all non-agent elements of a multi-agent system are typically considered to be part of the multi-agent system environment. The key role of the environment as a first-class abstraction in the engineering of multi-agent systems is today generally acknowledged in the multi-agent system community, so the environment should be explicitly accounted for in the engineering of multi-agent systems, working as a new design dimension for agent-oriented methodologies. At least two main ingredients shape the environment: environment abstractions (entities of the environment encapsulating some functions) and topology abstractions (entities of the environment that represent the spatial structure, either logical or physical). In addition, the engineering of non-trivial multi-agent systems requires principles and mechanisms for supporting the management of the complexity of the system representation. These principles lead to the adoption of a multi-layered description, which can be used by designers to provide different levels of abstraction over multi-agent systems.
The research in these fields has led to the formulation of a new version of the SODA methodology in which environment abstractions and layering principles are exploited for engineering multi-agent systems.
Abstract:
It is well known that the deposition of gaseous pollutants and aerosols plays a major role in the deterioration of monuments and built cultural heritage in European cities. Despite the many studies dedicated to environmental damage to cultural heritage, in the case of cement mortars, commonly used in 20th-century architecture, the deterioration due to the impact of air multi-pollutants, especially the formation of black crusts, is still not well explored, making this issue a challenging area of research. This work centers on cement mortar–environment interactions, focusing on the diagnosis of the damage to modern built heritage due to air multi-pollutants. For this purpose three sites, exposed to different urban areas in Europe, were selected for sampling and subsequent laboratory analyses: Centennial Hall, Wroclaw (Poland), Chiesa dell'Autostrada del Sole, Florence (Italy), and Casa Galleria Vichi, Florence (Italy). The sampling sessions were performed taking into account the height from ground level and the protection from rain run-off (sheltered, partly sheltered and exposed areas). The complete characterization of the collected damage layers and underlying materials was performed using a range of analytical techniques: optical and scanning electron microscopy, X-ray diffractometry, differential and gravimetric thermal analysis, ion chromatography, flash combustion/gas chromatographic analysis, and inductively coupled plasma-optical emission spectrometry. The data were elaborated using statistical methods (i.e. principal component analysis), and the enrichment factor for cement mortars was calculated for the first time. The results obtained from the experimental activity performed on the damage layers indicate that gypsum, formed by the deposition of atmospheric sulphur compounds, is the main damage product at surfaces sheltered from rain run-off at Centennial Hall and Casa Galleria Vichi. By contrast, gypsum was not identified in the samples collected at Chiesa dell'Autostrada del Sole. This is connected to the restoration works, particularly surface cleaning, regularly performed for the maintenance of the building. Moreover, the results demonstrated the correlation between the location of the building and the composition of the damage layer: Centennial Hall is mainly subject to the impact of pollutants emitted from the nearby coal power stations, whilst Casa Galleria Vichi is principally affected by pollutants from vehicular exhaust in front of the building.
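For reference, the enrichment factor commonly used in such studies has the standard form below; the choice of reference element R and of the unweathered substrate as background are assumptions, not necessarily those adopted in this work:

```latex
% Enrichment factor of element X in the damage layer, relative to a
% reference element R (e.g. Ca or Si) and to the unweathered mortar substrate
EF_X = \frac{\left( C_X / C_R \right)_{\text{damage layer}}}
            {\left( C_X / C_R \right)_{\text{substrate}}}
```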
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is consequently divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements are seen over previous works with the more accurate calculation of optical absorption and shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
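As background, a classical single-layer, steady-state power balance of the kind such models start from can be written as follows. This is a simplified textbook form, not the multi-layer formulation developed in the work, and the grouping of conduction losses into a single P_loss term is an assumption:

```latex
% A: absorptivity, P: laser power, rho: density, v: traverse speed,
% w, d: cut width and depth, c_p: specific heat, T_v: vaporisation
% temperature, T_0: ambient temperature, L_m, L_v: latent heats of
% melting and vaporisation, P_loss: conduction and other losses
A\,P \;=\; \rho\, v\, w\, d \left[ c_p\,(T_v - T_0) + L_m + L_v \right]
\;+\; P_{\text{loss}}
```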
Abstract:
Optical coherence tomography (OCT) is a well-established imaging modality in ophthalmology and is used daily in the clinic. Automatic evaluation of such datasets requires an accurate segmentation of the retinal cell layers. However, due to the naturally low signal-to-noise ratio and the resulting poor image quality, this task remains challenging. We propose an automatic graph-based multi-surface segmentation algorithm that internally uses soft constraints to add prior information from a learned model. This improves the accuracy of the segmentation and increases the robustness to noise. Furthermore, we show that the graph size can be greatly reduced by applying a smart segmentation scheme. This allows the segmentation to be computed in seconds instead of minutes, without deteriorating the segmentation accuracy, making it ideal for a clinical setup. An extensive evaluation on 20 OCT datasets of healthy eyes showed a mean unsigned segmentation error of 3.05 ± 0.54 μm over all datasets when compared to the average observer, which is lower than the inter-observer variability. Similar performance was measured for the task of drusen segmentation, demonstrating the usefulness of soft constraints as a tool for dealing with pathologies.
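To illustrate how a soft prior can be folded into a surface-extraction cost, here is a simplified 2D, single-surface dynamic-programming analogue, not the paper's graph-based multi-surface algorithm; the cost image, prior vector and penalty weight are assumptions:

```python
import numpy as np

def segment_surface(cost, prior, max_jump=2, soft_weight=0.1):
    """Extract one surface from a 2D cost image (rows = depth, cols =
    A-scans) by dynamic programming. A hard constraint limits the row
    change between neighbouring columns to +-max_jump; a soft constraint
    adds a quadratic penalty for deviating from a learned prior position
    per column."""
    rows, cols = cost.shape
    total = np.full((rows, cols), np.inf)
    back = np.zeros((rows, cols), dtype=int)
    r = np.arange(rows)
    total[:, 0] = cost[:, 0] + soft_weight * (r - prior[0]) ** 2
    for c in range(1, cols):
        for i in range(rows):
            lo, hi = max(0, i - max_jump), min(rows, i + max_jump + 1)
            j = lo + int(np.argmin(total[lo:hi, c - 1]))
            total[i, c] = (total[j, c - 1] + cost[i, c]
                           + soft_weight * (i - prior[c]) ** 2)
            back[i, c] = j
    surface = np.zeros(cols, dtype=int)
    surface[-1] = int(np.argmin(total[:, -1]))
    for c in range(cols - 1, 0, -1):      # backtrack the optimal surface
        surface[c - 1] = back[surface[c], c]
    return surface
```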