459 results for T-parallelism
Abstract:
A number of researchers have investigated the application of neural networks to visual recognition, with much of the emphasis placed on exploiting the network's ability to generalise. However, despite the benefits of such an approach, it is not at all obvious how networks can be developed that are capable of recognising objects subject to changes in rotation, translation and viewpoint. In this study, we suggest that a possible solution to this problem can be found by studying aspects of visual psychology, in particular perceptual organisation. For example, it appears that grouping lines together on the basis of perceptually significant features can facilitate viewpoint-independent recognition. The work presented here identifies simple grouping measures based on parallelism and connectivity, and shows how multi-layer perceptrons (MLPs) can be trained to detect and determine the perceptual significance of any group presented. In this way, it is shown how MLPs trained via backpropagation to perform individual grouping tasks can be brought together into a novel, large-scale network capable of determining the perceptual significance of the whole input pattern. Finally, the applicability of such significance values to recognition is investigated, and results indicate that both the MLP and the Kohonen Feature Map can be trained to recognise simple shapes described in terms of perceptual significances. This study has also provided an opportunity to investigate aspects of the backpropagation algorithm, particularly its ability to generalise, and the results of various generalisation tests are reported. In applying the backpropagation algorithm to certain problems, we found the standard learning algorithm deficient in performance; an improvement could, however, be obtained when suitable modifications were made to the algorithm. The modifications and the consequent results are reported here.
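As an illustration of the training approach this abstract describes, the following is a minimal sketch (Python with NumPy) of a multi-layer perceptron trained by backpropagation to score a candidate grouping. The single angle-difference feature, the network size and the 0.1 rad "significance" rule are illustrative assumptions, not the thesis's actual design.

    import numpy as np

    # Hypothetical training set: the single feature is the absolute angle
    # difference between two line segments (radians); the target marks the
    # pair as a perceptually significant "parallel" group.
    rng = np.random.default_rng(0)
    angles = rng.uniform(0.0, np.pi / 2, size=(200, 1))
    targets = (angles < 0.1).astype(float)

    def sig(x):
        return 1.0 / (1.0 + np.exp(-x))

    # One-hidden-layer perceptron trained by plain batch backpropagation.
    W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
    lr = 0.5
    for _ in range(2000):
        h = sig(angles @ W1 + b1)              # forward pass
        y = sig(h @ W2 + b2)
        dy = (y - targets) * y * (1 - y)       # output delta (squared error)
        dh = (dy @ W2.T) * h * (1 - h)         # hidden delta
        W2 -= lr * h.T @ dy / len(angles); b2 -= lr * dy.mean(axis=0)
        W1 -= lr * angles.T @ dh / len(angles); b1 -= lr * dh.mean(axis=0)

    # Significance of a near-parallel pair versus a clearly non-parallel one.
    for test in (0.05, 0.8):
        score = sig(sig(np.array([[test]]) @ W1 + b1) @ W2 + b2)
        print(f"angle diff {test:.2f} rad -> significance {score[0, 0]:.2f}")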
Abstract:
This study considers the application of image analysis in petrography and investigates the possibilities for advancing existing techniques by introducing feature extraction and analysis capabilities of a higher level than those currently employed. The aim is to construct relevant, useful descriptions of crystal form and inter-crystal relations in polycrystalline igneous rock sections. Such descriptions cannot be derived until the 'ownership' of boundaries between adjacent crystals has been established: this is the fundamental problem of crystal boundary assignment. An analysis of this problem establishes key image features which reveal boundary ownership, and a set of explicit analysis rules is presented. A petrographic image analysis scheme based on these principles is outlined and the implementation of key components of the scheme considered. An algorithm for the extraction and symbolic representation of image structural information is developed. A new multiscale analysis algorithm which produces a hierarchical description of the linear and near-linear structure on a contour is presented in detail. Novel techniques for symmetry analysis are developed. The analyses considered contribute both to the solution of the boundary assignment problem and to the construction of geologically useful descriptions of crystal form. The analysis scheme employs grouping principles such as collinearity, parallelism, symmetry and continuity, providing a link between this study and more general work in perceptual grouping and intermediate-level computer vision. Consequently, the techniques developed here may be expected to find wider application beyond the petrographic domain.
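By way of illustration, a crude parallelism grouping measure of the kind such a scheme might employ can be sketched as follows. The exponential weighting and the scale constants (0.1 rad, 50 px) are assumptions for illustration, not the measures defined in the thesis.

    import math

    def parallelism_score(seg_a, seg_b):
        """Crude perceptual-grouping score for a pair of line segments.

        Each segment is ((x0, y0), (x1, y1)). The score rewards similar
        orientation and small separation between midpoints.
        """
        def angle(seg):
            (x0, y0), (x1, y1) = seg
            return math.atan2(y1 - y0, x1 - x0) % math.pi   # undirected

        def midpoint(seg):
            (x0, y0), (x1, y1) = seg
            return ((x0 + x1) / 2, (y0 + y1) / 2)

        dtheta = abs(angle(seg_a) - angle(seg_b))
        dtheta = min(dtheta, math.pi - dtheta)              # fold to [0, pi/2]
        (ax, ay), (bx, by) = midpoint(seg_a), midpoint(seg_b)
        gap = math.hypot(bx - ax, by - ay)
        return math.exp(-dtheta / 0.1) * math.exp(-gap / 50.0)

    print(parallelism_score(((0, 0), (10, 0)), ((0, 5), (10, 5))))  # near 1
    print(parallelism_score(((0, 0), (10, 0)), ((0, 0), (0, 10))))  # near 0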
Abstract:
Computational performance increasingly depends on parallelism, and many systems rely on heterogeneous resources such as GPUs and FPGAs to accelerate computationally intensive applications. However, implementations for such heterogeneous systems are often hand-crafted and optimised for a single computation scenario, and it can be challenging to maintain high performance when application parameters change. In this paper, we demonstrate that machine learning can help to dynamically choose parameters for task scheduling and load-balancing based on changing characteristics of the incoming workload. We use a financial option pricing application as a case study. We propose a simulation of processing financial tasks on a heterogeneous system with GPUs and FPGAs, and show how dynamic, on-line optimisations could improve such a system. We compare on-line and batch processing algorithms, and we also consider cases with no dynamic optimisations.
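The following toy simulation illustrates the kind of on-line, feedback-driven load balancing the paper argues for. The hidden device speeds, the batch size and the +/-0.05 update rule are invented for illustration; they are not the paper's model of option-pricing workloads.

    import random

    # Toy simulation of on-line load balancing between two accelerators.
    SPEED = {"gpu": 4.0, "fpga": 2.5}        # tasks per unit time (hidden)

    def process(batch, share_gpu):
        """Split a batch by the current share; return per-device times."""
        gpu_tasks = int(batch * share_gpu)
        fpga_tasks = batch - gpu_tasks
        t_gpu = gpu_tasks / (SPEED["gpu"] * random.uniform(0.8, 1.2))
        t_fpga = fpga_tasks / (SPEED["fpga"] * random.uniform(0.8, 1.2))
        return t_gpu, t_fpga

    share = 0.5                              # initial GPU share of each batch
    for _ in range(20):
        t_gpu, t_fpga = process(batch=1000, share_gpu=share)
        # Shift work towards whichever device finished its portion early.
        share += 0.05 if t_gpu < t_fpga else -0.05
        share = min(max(share, 0.05), 0.95)
    print(f"learned GPU share ~ {share:.2f}; speed ratio is "
          f"{SPEED['gpu'] / (SPEED['gpu'] + SPEED['fpga']):.2f}")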
Abstract:
Membrane systems are computationally equivalent to Turing machines; however, their distributed and massively parallel nature yields polynomial solutions to problems whose traditional solutions are non-polynomial. To date, research on implementing membrane systems has not yet matched the massively parallel character of this computational model. The best published approaches achieve a distributed architecture termed "partially parallel evolution with partially parallel communication", in which several membranes are allocated to each processor, proxies are used to communicate with membranes allocated to other processors, and a policy of access control to the communications is required. These approaches obtain parallelism across processors in the application of evolution rules and in the internal communication among membranes allocated within each processor. External communications, however, share a common communication line for messages between membranes on different processors and are therefore sequential. In this work, we present a new hierarchical architecture that achieves parallel external communication among processors and substantially increases the parallelisation of evolution-rule application and internal communication. Consequently, the time needed for each evolution step is reduced, and this new distributed hierarchical architecture comes closer to the massively parallel character required by the model.
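To make the architectural claim concrete, the toy cost model below compares one evolution step when external messages queue on a shared line against a hierarchical arrangement where they travel concurrently along tree levels. All timing constants and counts are illustrative assumptions, not measurements from the work.

    import math

    # Toy cost model for one evolution step under the two architectures.
    PROCESSORS = 8
    MEMBRANES_PER_PROC = 4
    T_RULE = 1.0    # time to apply evolution rules in one membrane
    T_EXT = 0.5     # time for one external (inter-processor) message

    # Partially parallel architecture: rules run in parallel, but every
    # external message queues on a single shared communication line.
    t_shared = MEMBRANES_PER_PROC * T_RULE + PROCESSORS * T_EXT

    # Hierarchical architecture: processors form a tree, so external
    # messages travel concurrently and only ~log2(P) levels are serial.
    t_hier = MEMBRANES_PER_PROC * T_RULE + math.ceil(math.log2(PROCESSORS)) * T_EXT

    print(f"shared line: {t_shared:.1f}  hierarchical: {t_hier:.1f}")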
Abstract:
This paper develops the basic concepts for constructing many-valued intelligent systems suited to the principal problems of human activity. The proposed hybrid tools are two-valued in implementation but simulate neural processes of spatial summation that differ in their level of action, in the inertial and threshold properties of neural membranes, and in the frequency modulation of the transmitted message sequences. All of the enumerated properties and functions are, in essence, not only discrete in time but also many-valued.
Abstract:
This article presents the principal results of the Ph.D. thesis Investigation and classification of doubly resolvable designs by Stela Zhelezova (Institute of Mathematics and Informatics, BAS), successfully defended at the Specialized Academic Council for Informatics and Mathematical Modeling on 22 February 2010.
Abstract:
The author draws a temporal parallel between the economic and social changes of 2000-2010 and the evolution of work-related values and attitudes, and seeks answers to the following questions: what priorities characterised attitudes towards particular aspects of work in 2010 in the EU member states of Central and Eastern Europe, and especially in Hungary? Have these characteristics changed over the past ten years and, if so, in which direction? Do the changes follow the same tendency in the countries examined? Can characteristic differences be identified in comparison with three selected older EU-15 member states with high living standards? The analysis relies largely on data from the EVS survey published in 2010, based on data collected during 2008 and 2009. The study focuses on those Central and Eastern European countries that became EU members during the past decade and had already taken part in the 2000 survey.
Abstract:
The development of 3G (third-generation telecommunication) value-added services brings higher requirements for Quality of Service (QoS). Wideband Code Division Multiple Access (WCDMA) is one of the three 3G standards, and enhancing QoS for the WCDMA Core Network (CN) is becoming increasingly important for users and carriers. This dissertation focuses on enhancing QoS for the WCDMA CN, with the purpose of realising the DiffServ (Differentiated Services) QoS model for the WCDMA CN. Based on the parallelism characteristic of Network Processors (NPs), NP programming models are classified as Pool of Threads (POTs) and Hyper Task Chaining (HTC). In this study, an integrated programming model combining the two was designed. This model is highly efficient and flexible, and also solves the problems of sharing conflicts and packet ordering. We used it as the programming model for realising DiffServ QoS for the WCDMA CN. The realisation mechanism of the DiffServ model consists mainly of buffer management, packet scheduling and packet classification algorithms based on NPs. First, we proposed an adaptive buffer management algorithm called Packet Adaptive Fair Dropping (PAFD), which takes both fairness and throughput into consideration and has smooth service curves. Then, an improved packet scheduling algorithm called Priority-based Weighted Fair Queuing (PWFQ) was introduced to ensure fairness in packet scheduling and reduce the queuing time of data packets, while keeping delay and jitter within a small range. Thirdly, a multi-dimensional packet classification algorithm called Classification Based on Network Processors (CBNPs) was designed; it effectively reduces memory accesses and storage space, and offers lower time and space complexity. Lastly, an integrated hardware and software system implementing the DiffServ QoS model for the WCDMA CN was proposed and implemented on the NP IXP2400. The experimental results show that the proposed system significantly enhances QoS for the WCDMA CN: it markedly improves response-time consistency, display distortion and audio-video synchronisation, thereby increasing network efficiency and saving network resources.
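The abstract does not specify PWFQ itself, but the scheduling style it builds on, weighted fair queuing with a strict-priority tier, can be sketched as follows. The flow names, weights and virtual-finish-time bookkeeping are illustrative, not the dissertation's algorithm.

    import heapq

    class WfqScheduler:
        """Weighted fair queuing with a strict-priority tier (sketch)."""

        def __init__(self, weights):
            self.weights = weights           # flow -> weight
            self.finish = {f: 0.0 for f in weights}
            self.heap = []                   # (priority, finish, seq, flow, size)
            self.seq = 0

        def enqueue(self, flow, size, priority=1):
            # Virtual finish time grows inversely with the flow's weight,
            # so heavily weighted flows are served more often.
            self.finish[flow] += size / self.weights[flow]
            heapq.heappush(self.heap,
                           (priority, self.finish[flow], self.seq, flow, size))
            self.seq += 1

        def dequeue(self):
            priority, _, _, flow, size = heapq.heappop(self.heap)
            return flow, size, priority

    sched = WfqScheduler({"voice": 4, "video": 2, "data": 1})
    sched.enqueue("data", 1500)
    sched.enqueue("voice", 200, priority=0)   # expedited class served first
    sched.enqueue("video", 1000)
    while sched.heap:
        print(sched.dequeue())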
Abstract:
As massive data sets become increasingly available, people face the problem of how to process and understand them effectively. Traditional sequential computing models are giving way to parallel and distributed computing models such as MapReduce, owing both to the large size of the data sets and to their high dimensionality. This dissertation, in the same direction as other research based on MapReduce, develops effective techniques and applications using MapReduce that can help people solve large-scale problems. Three different problems are tackled. The first deals with processing terabytes of raster data in a spatial data management system: aerial imagery files are broken into tiles to enable data-parallel computation. The second and third problems deal with dimension reduction techniques for handling data sets of high dimensionality. Three variants of the nonnegative matrix factorization technique are scaled up to factorize matrices with dimensions in the order of millions in MapReduce, based on different matrix multiplication implementations. Two algorithms, which compute the CANDECOMP/PARAFAC and Tucker tensor decompositions respectively, are parallelized in MapReduce by carefully partitioning the data and arranging the computation to maximize data locality and parallelism.
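As a small illustration of the map/reduce decomposition used to scale such factorizations, the sketch below computes the Gram matrix A^T A row by row, one building block of nonnegative matrix factorization updates. It runs in-process and stands in for the Hadoop framework details (key encoding, shuffling), which are omitted; the tiny matrix is invented for the example.

    from collections import defaultdict
    import itertools

    A = [[1.0, 2.0],
         [0.0, 1.0],
         [3.0, 1.0]]

    def mapper(row):
        # Each row of A contributes one outer product to A^T A;
        # emit a ((i, j), value) pair per output cell.
        for i, j in itertools.product(range(len(row)), repeat=2):
            yield (i, j), row[i] * row[j]

    def reducer(pairs):
        acc = defaultdict(float)
        for key, value in pairs:
            acc[key] += value            # sum the partial products per cell
        return dict(acc)

    result = reducer(itertools.chain.from_iterable(mapper(r) for r in A))
    print(result[(0, 0)], result[(0, 1)], result[(1, 1)])   # 10.0 5.0 6.0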
Abstract:
This study proposes a solution for scheduling data processing with variable demand in cloud environments. The system checks variables specific to the business context of a company incubated at the Digital Metropole Institute of UFRN, and generates a strategy for identifying the machine configurations available in a cloud environment, focusing on processing performance through data load balancing and parallelism in the software execution flow. The goal is to meet seasonal demand within a standard time limit set by the company, while controlling operating costs by using cloud services in the IaaS layer.
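A hypothetical sketch of the selection step might look as follows: given a seasonal demand and a deadline, pick the cheapest IaaS configuration that still meets it. The machine names, throughput rates and cost model are invented for illustration and do not come from the study.

    MACHINES = [                      # (name, tasks_per_hour, cost_per_hour)
        ("small",  100, 0.10),
        ("medium", 250, 0.22),
        ("large",  600, 0.50),
    ]

    def pick_configuration(demand_tasks, deadline_hours):
        """Return (total_cost, machine, count) for the cheapest feasible setup."""
        best = None
        for name, rate, count_cost in MACHINES:
            for count in range(1, 33):            # parallel instances
                hours = demand_tasks / (rate * count)
                if hours <= deadline_hours:
                    total = count_cost * count * hours
                    if best is None or total < best[0]:
                        best = (total, name, count)
                    break                          # first feasible count suffices
        return best

    print(pick_configuration(demand_tasks=50_000, deadline_hours=4))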
Abstract:
This paper analyses the factors involved in the interpretation of utterances, assuming that interpretation, as a process, involves the interaction of distinct modular systems of the mind: the Faculty of Language (FdeL) and the sensorimotor (SM) and conceptual-intentional (CI) systems. The latter includes mechanisms for building conceptual representations (C) and inferential mechanisms involved more globally in the fixing of intentional states (I). These systems external to the Faculty of Language impose constraints such that the cores of information reaching the interfaces are legible to the SM and CI systems (Chomsky, 1995-2008). In this sense, understanding interpretation as a process entails attending to the relation between the purely linguistic aspects (syntax, semantics), the prosodic aspects, and the inferential aspects (pragmatics). This paper seeks to understand the workings of those elements of the linguistic system that favour obtaining the assumptions needed to carry out that process. Adopting a parallelism between the syntactic categories and the semantic categories postulated in Relevance Theory, it attempts to elucidate how the varied linguistic evidence provided by a speaker (H) in communication operates, such that a hearer (O) can arrive at some hypothesis about the speaker's meaning. From the analysis of various utterances, the scope of that parallelism is explored and a tentative characterisation of the interpretation process is reached. Finally, following ideas from several authors, it is proposed that the FdeL-CI interface be conceived in terms of primitive pairs of information which are relevant to all the cognitive systems involved in communication.
Abstract:
Mobile Network Optimization (MNO) technologies have advanced at a tremendous pace in recent years. The Dynamic Network Optimization (DNO) concept emerged years ago, aiming to continuously optimize the network in response to variations in network traffic and conditions. Yet DNO development is still in its infancy, hindered mainly by the significant bottleneck of lengthy optimization runtimes. This paper identifies parallelism in greedy MNO algorithms and presents an advanced distributed parallel solution. The solution is designed, implemented and applied to real-life projects, yielding significant, highly scalable and nearly linear speedups of up to 6.9 and 14.5 on distributed 8-core and 16-core systems respectively. Meanwhile, the optimization outputs exhibit self-consistency and high precision compared with their sequential counterpart. This is a milestone in realizing DNO. Further, the techniques may be applied to similar applications based on greedy optimization algorithms.
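The parallelism identified in such greedy algorithms can be illustrated with a short sketch: each iteration scores all candidate moves concurrently, then applies the single best one, so the result matches the sequential algorithm. The toy objective and move set below are assumptions; the paper's real objective is a network-planning model.

    from concurrent.futures import ProcessPoolExecutor

    CANDIDATES = list(range(-50, 51))          # hypothetical move set

    def score(args):
        # Stand-in objective: distance to a toy target (lower is better).
        state, move = args
        return (abs((state + move) - 37), move)

    def greedy(state, steps=5, workers=8):
        with ProcessPoolExecutor(max_workers=workers) as pool:
            for _ in range(steps):
                # Evaluate every candidate move in parallel, pick the best.
                best_cost, best_move = min(
                    pool.map(score, ((state, m) for m in CANDIDATES)))
                if best_cost >= abs(state - 37):
                    break                      # no improving move remains
                state += best_move
        return state

    if __name__ == "__main__":
        print(greedy(state=0))                 # converges to 37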