958 results for Parallel programming (computer science)


Relevance: 100.00%

Abstract:

Background Gray-scale images make up the bulk of the data in biomedical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks; in particular, working memory is managed automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to such high-level tools is to develop new algorithms in a language like C++, which gives the developer full control over how memory is handled; however, the resulting workflow for prototyping new algorithms is rather time-intensive and not appropriate for a researcher with little or no software development experience. Another alternative is to use command line tools that run image processing tasks, store intermediate results on the hard disk, and are automated with shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only a few tools provide this kind of processing interface; they are usually quite task-specific, and they don't provide a clear path for turning a prototype shell script into a new command line tool.

Results The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that makes it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes MIA easy to extend, usually without the need to touch or recompile existing code.

Conclusion In this article, we describe the general design of MIA, a general-purpose framework for gray-scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of the high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms with shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
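
To make the disk-backed workflow concrete, here is a minimal Java sketch of the same pattern: each processing step is a separate single-task executable, and the hard disk holds the intermediate result. The tool name `mia-filter`, its flags, and the filter strings are hypothetical placeholders for illustration, not MIA's actual command line interface.

```java
import java.io.IOException;

// Minimal sketch of a disk-backed tool pipeline in the style described
// above: each step is a separate single-task executable, and the hard
// disk holds the intermediate result. Tool names, flags, and filter
// strings are hypothetical placeholders, not MIA's real interface.
public class PipelineSketch {
    static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("step failed: " + String.join(" ", cmd));
        }
    }

    public static void main(String[] args) throws Exception {
        // Step 1: denoise the input, writing the intermediate to disk.
        run("mia-filter", "-i", "input.png", "-o", "tmp.png",
            "-f", "median:w=2");            // string-based filter description
        // Step 2: threshold the intermediate file into the final output.
        run("mia-filter", "-i", "tmp.png", "-o", "out.png",
            "-f", "binarize:min=128");
    }
}
```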

Relevance: 100.00%

Abstract:

In programming languages with dynamic memory allocation, such as Java, knowing that a reference variable x points to an acyclic data structure is valuable for the analysis of termination and resource usage (e.g., execution time or memory consumption). For instance, this information guarantees that the depth of the data structure to which x points is greater than the depth of the data structure pointed to by x.f, for any field f of x. This, in turn, allows the number of iterations of a loop which traverses the structure to be bounded by its depth, which is essential in order to prove termination or infer the resource usage of the loop. The present paper provides an Abstract-Interpretation-based formalization of a static analysis for inferring acyclicity, which works on the reduced product of two abstract domains: reachability, which models the property that the location pointed to by a variable w can be reached by dereferencing another variable v (in this case, v is said to reach w); and cyclicity, which models the property that v can point to a cyclic data structure. The analysis is proven to be sound and optimal with respect to the chosen abstraction.
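
To make the role of acyclicity concrete, consider the kind of loop such an analysis targets; a minimal Java sketch, with illustrative class and field names not taken from the paper:

```java
// A singly linked list node; the field `next` plays the role of `x.f`.
class Node {
    int value;
    Node next;
}

public class Traverse {
    // If the analysis proves that `head` points to an acyclic structure,
    // then head.next is strictly "shallower" than head, so the loop makes
    // at most depth(head) iterations and termination follows. On a cyclic
    // list the same loop would never terminate.
    static int sum(Node head) {
        int total = 0;
        for (Node x = head; x != null; x = x.next) {
            total += x.value;
        }
        return total;
    }
}
```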

Relevance: 100.00%

Abstract:

Networks of Evolutionary Processors (NEP), proposed in [Mitrana et al., 2001], are a bio-inspired computational model based on the evolution of cell populations, which models some properties of evolving cell communities at the syntactic level. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment in time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species, and their community evolves according to biological processes such as mutation and cell division; these processes represent the natural process of evolution, expose an intrinsic feature of nature, namely parallelism, and are modelled as operations on words. Only those words accepted as survivors (that is, as "correct") belong to the genotype space of the species. Formally, NEP is a parallel and distributed symbolic-processing architecture inspired by the Connection Machine [Hillis, 1981], the Flow Logic Paradigm [Errico and Jesshope, 1994], and the Networks of Parallel Language Processors (RPPL) [Csuhaj-Varju and Salomaa, 1997]. Novel extensions have been added to the NEP model over time, to the point that we can now speak of a family of Networks of Bio-inspired Processors (NBP) [Mitrana et al., 2012b], and a considerable number of works over recent years have established the computational power of this family: in general, these models are computationally complete, universal, and efficient [Manea et al., 2007], [Manea et al., 2010b], [Mitrana and Martín-Vide, 2005]. The NEP model can therefore be said to have reached a considerable level of maturity.

Nevertheless, although the model is biologically inspired, its goals remain motivated by formal language theory and computer science; the biological aspects have been addressed from a qualitative perspective, and the approach to biological reality is merely syntactic. Addressing these aspects requires a broader perspective that incorporates the interplay of qualitative and quantitative aspects. The contribution of this Thesis can be considered a step forward into a new stage of NEPs, in which the quantitative character of the model is of primary interest and the domain of problems considered can visibly shift between computer science and biological modelling/simulation, among others. The computational framework we propose extends the NEP model and defines an architecture inspired by the functional blocks of the cellular signaling process, aimed at solving complex computational problems and modelling cellular phenomena from a discrete perspective. In particular, two extensions are proposed: (1) Transducers based on Networks of Evolutionary Processors (NEPT), and (2) Parametrized Networks of Polarized Evolutionary Processors (PNPEP). We formally prove that both NEPT and PNPEP preserve the properties and computational power of NEP, and we address several simulations of processes related to cellular signaling, both syntactically and computationally, in order to show the applicability and suitability of the two extensions.
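
Since NEP configurations are just sets of words evolved by operations on words, one evolutionary step can be sketched in a few lines; the following Java toy uses an illustrative mutation rule and survivor filter, not the thesis' actual definitions:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Toy sketch of one NEP-style evolutionary step: every word (cell) in
// the current configuration undergoes a mutation, and only the words
// accepted by the genotype space (the survivor filter) are kept for
// the next configuration. Alphabet, rule, and filter are illustrative
// placeholders, not the thesis' actual definitions.
public class NepStep {
    // A point-mutation rule: substitute every 'a' with 'b'.
    static String mutate(String word) {
        return word.replace('a', 'b');
    }

    // Genotype space membership: here, survivors contain no 'a'.
    static boolean survives(String word) {
        return !word.contains("a");
    }

    static Set<String> step(Set<String> configuration) {
        Set<String> next = new HashSet<>();
        for (String cell : configuration) { // conceptually parallel
            String mutated = mutate(cell);
            if (survives(mutated)) {
                next.add(mutated);
            }
        }
        return next;
    }

    public static void main(String[] args) {
        Set<String> config = new HashSet<>(Arrays.asList("abba", "bb", "aab"));
        System.out.println(step(config)); // e.g. [bb, bbb, bbbb]
    }
}
```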

Relevance: 100.00%

Abstract:

A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, specifically by their multiplication rules; this inspiration makes the model a syntactic abstraction of the way cells manipulate information. In particular, a NEP defines a theoretical computing device able to solve NP-complete problems efficiently in terms of time. In practice, NEPs simulated on conventional computing machines are expected to solve complex real problems (requiring high scalability) at the cost of high spatial complexity. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of the system's computation, its evolutionary state is described as a collection of words, where each word represents one cell; these fixed moments of evolution are called configurations. As in the biological model, words (cells) mutate and divide through simple bio-operations, but only the fit words (much as in natural selection) are kept for the next configuration. As a computing tool, a NEP defines a parallel and distributed symbolic-processing architecture, in other words, a network of language processors. Since the model was proposed to the scientific community in 2001, multiple variants have been developed, and their computational completeness, efficiency, and universality have been extensively studied and proven; the theoretical NEP model can therefore be considered to have reached maturity.

The main motivation of this Final Degree Project is to propose a practical approach that closes the gap between the theoretical NEP model and a real implementation able to run on high-performance computing platforms, in order to solve the complex problems today's society demands. Until now, the tools developed to simulate NEPs, while correct and yielding satisfactory results, have usually been tied to their execution environment, whether through specific software frameworks (Hadoop), direct hardware usage (GPUs), or problem-specific implementations. In this context, the fundamental purpose of this work is the development of Nepfix, a generic and extensible tool for executing any algorithm of the NEP model (or one of its variants), either locally, as a traditional application, or distributed using cloud services.

Nepfix is a software application developed over a 7-month cycle and currently in its second iteration, the prototype phase having been left behind. It is designed as a modular, self-contained application written in Java 8: it requires no specific execution environment, and any Java virtual machine is a valid container. Nepfix consists of two components or modules. The first module is the simulator, which executes a NEP. Its development takes into account the current state of the model, that is, the definitions of the most common processors and filters in the NEP family. This component also offers flexibility at execution time: the simulator's capabilities can be extended without modifying Nepfix, using a scripting language (Python). As part of this component, a JSON-based standard representation of the NEP model has been defined, together with a representation and encoding of words, which is necessary for communication between servers. An important characteristic of this component is that it can be considered a stand-alone application, so the distribution and execution strategies are entirely independent of it.

The second module distributes Nepfix in the cloud. This development is the result of an R&D process with a considerable scientific component, worth highlighting not only for the expected practical results but also for the research required by this new perspective on executing natural computing systems. The main characteristic of applications that run in the cloud is that they are managed by the platform and normally encapsulated in a container; in the case of Nepfix, this container is a Spring application that uses the HTTP or AMQP protocol to communicate with the other instances. As added value, Nepfix explores two different implementation perspectives for distribution and execution (developed in two separate iterations), which have a very significant impact on the simulator's capabilities and restrictions. The first iteration uses an asynchronous execution model: the components of the NEP network (processors and filters) are treated as elements that react to the need to process a word. This implementation is an optimization of a common topology in the NEP model that allows cloud tooling to provide transparent scaling (with respect to load balancing between processors), but it produces undesired effects such as nondeterminism in the order of the results and the impossibility of efficiently distributing strongly interconnected networks. The second iteration corresponds to the synchronous execution model: the elements of a NEP network follow an init-compute-synchronize cycle until the problem is solved. This synchronous perspective faithfully represents the theoretical NEP model, but the synchronization step is costly and requires additional infrastructure, specifically a RabbitMQ message queue server. For sufficiently large problems, however, the benefits outweigh the drawbacks, since distribution is immediate (there are no restrictions), although scaling is not trivial.

In short, the concept of Nepfix as a computational framework can be considered a success: the technology is viable, and the first results confirm that the characteristics originally sought have been achieved. Many fronts remain open for future research. This document proposes approaches to some of the problems identified, such as error recovery and the dynamic division of a NEP into different subdomains; other problems beyond the scope of this project, such as the standardization of word representation and optimizations in the execution of the synchronous model, remain open to future development. Finally, some preliminary results of this Final Degree Project were recently presented as a scientific paper at the International Work-Conference on Artificial Neural Networks (IWANN) 2015 and published in "Advances in Computational Intelligence", volume 9094 of Springer International Publishing's "Lecture Notes in Computer Science". This confirms that this work is, more than a Final Degree Project, only the beginning of work that may have a greater impact in the scientific community.
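
The synchronous init-compute-synchronize cycle lends itself to a compact illustration. The following Java sketch is a purely local, toy stand-in for what Nepfix distributes over HTTP/AMQP: each processor computes a step and then waits at a barrier before the next configuration. Every name in it is illustrative; none of it is taken from the Nepfix codebase.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

// Toy local model of the synchronous init-compute-synchronize cycle:
// each processor computes a step, then all wait at a barrier before
// the next configuration. In Nepfix proper, synchronization is
// distributed (RabbitMQ/AMQP); this stand-in is purely illustrative.
public class SyncCycleSketch {
    public static void main(String[] args) {
        int processors = 3;
        int steps = 4;
        CyclicBarrier barrier = new CyclicBarrier(processors,
                () -> System.out.println("-- configuration complete --"));

        List<Thread> workers = new ArrayList<>();
        for (int id = 0; id < processors; id++) {
            int pid = id;
            Thread t = new Thread(() -> {
                try {
                    for (int s = 0; s < steps; s++) {
                        // compute: apply this processor's rules and filters
                        System.out.println("processor " + pid + " step " + s);
                        barrier.await(); // synchronize with the others
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            });
            workers.add(t);
            t.start();
        }
        for (Thread t : workers) {
            try { t.join(); } catch (InterruptedException ignored) {}
        }
    }
}
```

In the distributed setting the barrier is replaced by message exchange, which is exactly where the synchronization cost discussed above arises.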

Relevance: 100.00%

Abstract:

The final degree project proposal chosen by the author is the continuation of a project begun during the Practicum course of the previous semester. The project was conceived in a small consulting firm called 'Grupo Develop' (described in more detail in the 'Collaborating entity and location' section), located in the east of Madrid; the organization is incorporated as a foundation and is essentially dedicated to quality consulting, working among others with NGOs and the third sector. Grupo Develop needed to take advantage of new technologies to offer a new and better service to its clients through a project directed and carried out by a computer engineer. From this context emerged a project consisting of designing, deploying, programming, and maintaining a software platform capable of helping organizations (in this particular case, mostly non-profit organizations) to manage themselves better according to different quality models such as EFQM or ISO. Certification under both models is, moreover, increasingly demanded as a quality guarantee by public and private bodies and even by clients. The program must therefore become a tool that genuinely supports each entity in producing a diagnosis of its own management and, of course, must bring these organizations closer to the most prestigious certificates. From the point of view of a computing professional, the project is clearly structured as a classic client-server architecture in which all the entities involved (15 so far) have participated actively and in parallel with the development. This involvement has noticeably slowed development and has required synchronizing two parallel projects, one for deployment and one for the development of the major improvements.

Relevance: 100.00%

Abstract:

The objective of this final degree project has been to analyse the general evolution of technology, in electronics as well as in telecommunications and computing, from its beginnings to the present day, studying how those advances have improved, and even changed, our daily lives, and how, as many of these technological products became objects of mass consumption, commercial and economic dynamics led companies to program the end of a product's useful life in order to increase demand and, consequently, production. This practice has been known for decades as planned obsolescence. The document briefly reviews the historical events that promoted the beginning of this practice, and mentions the technological devices that have evolved the most and are therefore the most likely targets for it, given how such changes influence consumers. Another important aspect of planned obsolescence is its influence on the behaviour of society, creating consumerist attitudes which are precisely what sustains an economic cycle based on building this practice into product design: more consumption means more production and, therefore, higher profits for the manufacturer. Yet what for some represents a wide profit margin leaves other parts of the equation worse off: third-world countries are being filled with electronic waste, and the environment and the resources available to us today, which are not unlimited, are being seriously affected by this practice. To close the analysis, some of the techniques employed in several of the most widely used products today are reviewed, together with the conclusions reached after analysing all these points.

Relevance: 100.00%

Abstract:

This paper presents a new approach to the delineation of local labour markets based on evolutionary computation. The main objective is the regionalisation of a given territory into functional regions based on commuting flows. According to the relevant literature, such regions are defined so that (a) their boundaries are rarely crossed in daily journeys to work, and (b) a high degree of intra-area movement exists. This proposal merges municipalities into functional regions by maximizing a fitness function that measures aggregate intra-region interaction, under constraints of inter-region separation and minimum size. Real results are presented, based on the latest database from the Census of Population in the Region of Valencia. A comparison between the results obtained through the most widely used official method (that of the British Travel-to-Work Areas) and those from our approach is also presented, showing important improvements both in the number of distinct market areas identified that meet the statistical criteria and in the degree of aggregate intra-market interaction.
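
To illustrate the kind of objective involved, the sketch below computes, for a candidate grouping of municipalities into regions, the share of commuting flows that stay within their region. The flow matrix and this exact fitness form are illustrative assumptions; the paper's fitness function additionally handles the inter-region separation and minimum-size constraints.

```java
// Illustrative fitness for a regionalisation candidate: the fraction
// of all commuting trips that start and end in the same region. This
// exact form is an assumption for illustration; the paper's fitness
// also enforces separation and minimum-size constraints.
public class RegionFitness {
    // flows[i][j] = commuters living in municipality i, working in j.
    // region[i]   = index of the region municipality i is assigned to.
    static double intraRegionShare(int[][] flows, int[] region) {
        long intra = 0, total = 0;
        for (int i = 0; i < flows.length; i++) {
            for (int j = 0; j < flows[i].length; j++) {
                total += flows[i][j];
                if (region[i] == region[j]) {
                    intra += flows[i][j];
                }
            }
        }
        return total == 0 ? 0.0 : (double) intra / total;
    }

    public static void main(String[] args) {
        int[][] flows = {{50, 10, 1}, {8, 40, 2}, {1, 3, 30}};
        int[] region = {0, 0, 1}; // municipalities 0 and 1 grouped together
        System.out.println(intraRegionShare(flows, region)); // ~0.95
    }
}
```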

Relevance: 100.00%

Abstract:

Given a territory composed of basic geographical units, the delineation of local labour market areas (LLMAs) can be seen as a problem in which those units are grouped subject to multiple constraints. In previous research, standard genetic algorithms were not able to find valid solutions, so a specific evolutionary algorithm was developed. The inclusion of multiple ad hoc operators allowed the algorithm to find better solutions than those of a widely used greedy method; however, the percentage of invalid solutions was still very high. In this paper we improve that evolutionary algorithm through the inclusion of (i) a reparation process that allows every invalid individual to fulfil the constraints and contribute to the evolution, and (ii) a hill-climbing optimisation procedure applied to each generated individual by means of an appropriate reassignment of some of its constituent units. We compare the results of both techniques against the previous results and the greedy method.
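
A minimal sketch of the two additions, under illustrative assumptions about the encoding (one region label per basic unit) and with stand-ins for the constraint check and fitness function, which the paper defines in domain-specific terms:

```java
import java.util.Arrays;

// Minimal sketch of the two improvements described above, under
// illustrative assumptions: an individual assigns a region label to
// each unit, `valid` checks the constraints, and `fitness` scores a
// candidate. Both stand in for the paper's domain-specific definitions.
public class RepairAndClimb {
    interface Model {
        boolean valid(int[] individual);
        double fitness(int[] individual);
    }

    // Reparation: greedily reassign units until the constraints hold,
    // so every individual can contribute instead of being discarded.
    static int[] repair(int[] ind, Model m, int regions) {
        int[] out = ind.clone();
        for (int u = 0; u < out.length && !m.valid(out); u++) {
            for (int r = 0; r < regions && !m.valid(out); r++) {
                out[u] = r; // try each region label for this unit
            }
        }
        return out;
    }

    // Hill climbing: keep single-unit reassignments that improve fitness.
    static int[] climb(int[] ind, Model m, int regions) {
        int[] best = ind.clone();
        double bestFit = m.fitness(best);
        boolean improved = true;
        while (improved) {
            improved = false;
            for (int u = 0; u < best.length; u++) {
                for (int r = 0; r < regions; r++) {
                    int[] cand = best.clone();
                    cand[u] = r;
                    if (m.valid(cand) && m.fitness(cand) > bestFit) {
                        best = cand;
                        bestFit = m.fitness(cand);
                        improved = true;
                    }
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Toy model: any assignment is valid; fitness favours region 0.
        Model toy = new Model() {
            public boolean valid(int[] ind) { return true; }
            public double fitness(int[] ind) {
                return Arrays.stream(ind).filter(r -> r == 0).count();
            }
        };
        int[] start = {1, 2, 1};
        System.out.println(Arrays.toString(climb(start, toy, 3))); // [0, 0, 0]
    }
}
```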

Relevance: 100.00%

Abstract:

Learning and teaching processes are continually changing, and the design of learning technologies has therefore gained interest among educators and educational institutions, from secondary school to higher education. This paper describes the successful educational use of the social learning technologies and virtual laboratories designed by the authors, as well as of videos developed by the students. These tools, combined with other open educational resources (OERs) within a blended-learning methodology, have been employed to teach the subject of Computer Networks. We have verified not only that the application of OERs to the learning process leads to a significant improvement in assessment results, but also that combining several OERs enhances their effectiveness. These results are supported by, firstly, a study of both students' opinions and students' behaviour over five academic years and, secondly, a correlation analysis between the use of OERs and the grades obtained by students.

Relevance: 100.00%

Abstract:

A parallel algorithm to remove impulsive noise in digital images using heterogeneous CPU/GPU computing is proposed. The parallel denoising algorithm is based on the peer group concept and uses a Euclidean metric. In order to determine the number of pixels to be allocated to the multi-core CPU and to the GPUs, a performance analysis using large images is presented. A comparison of the parallel implementation on multi-core, on GPUs, and on a combination of both is performed. Performance has been evaluated in terms of execution time and megapixels per second. We present several optimization strategies that are especially effective in the multi-core environment and demonstrate significant performance improvements. The main advantage of the proposed noise removal methodology is its computational speed, which enables efficient filtering of color images in real-time applications.
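
As a rough illustration of the peer group idea, the sketch below flags a pixel as impulsive noise when too few of its neighbours lie within a Euclidean distance threshold in RGB space. The window, threshold, and peer count are illustrative parameters rather than the paper's tuned values, and the parallelism is a plain Java parallel stream rather than the paper's heterogeneous CPU/GPU split.

```java
import java.util.stream.IntStream;

// Rough sketch of peer-group impulse detection: pixel p is "noise" if
// fewer than PEERS of its 8 neighbours are within distance T of p in
// RGB space. T and PEERS are illustrative, not the paper's values, and
// parallelism is a plain parallel stream, not the CPU/GPU scheme.
public class PeerGroupSketch {
    static final double T = 45.0;
    static final int PEERS = 2;

    static double dist(int[] a, int[] b) {
        double dr = a[0] - b[0], dg = a[1] - b[1], db = a[2] - b[2];
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }

    // img[y][x] = {r, g, b}; returns a mask of detected noise pixels.
    static boolean[][] detect(int[][][] img) {
        int h = img.length, w = img[0].length;
        boolean[][] noisy = new boolean[h][w];
        IntStream.range(1, h - 1).parallel().forEach(y -> {
            for (int x = 1; x < w - 1; x++) {
                int peers = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        if ((dy != 0 || dx != 0)
                                && dist(img[y][x], img[y + dy][x + dx]) <= T)
                            peers++;
                noisy[y][x] = peers < PEERS;
            }
        });
        return noisy;
    }

    public static void main(String[] args) {
        int[][][] img = new int[5][5][3];     // black image...
        img[2][2] = new int[]{255, 255, 255}; // ...with one white impulse
        System.out.println(detect(img)[2][2]); // true
    }
}
```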

Relevance: 100.00%

Abstract:

Designing educational resources allows students to modify their learning process. In particular, online and downloadable educational resources have been successfully used in engineering education in recent years [1]. Usually these resources are free and accessible on the web; they are designed and developed by lecturers and used by their students, but they are rarely developed by students to be used by other students. In this work-in-progress, lecturers and students work together to implement educational resources which can be used by students to improve the learning process of the computer networks subject in engineering studies. In particular, network topologies modelling LANs (Local Area Networks) and MANs (Metropolitan Area Networks) are virtualized in order to simulate the behavior of the links and nodes when they are interconnected under different physical and logical designs.
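
As a flavour of what such a virtualized topology might model, the following Java toy estimates the transfer time of a message across a LAN segment and a MAN uplink from per-link bandwidth and delay; all names and figures are illustrative, not taken from the tools used in the course.

```java
import java.util.Arrays;
import java.util.List;

// Toy model of a virtualized topology: nodes joined by links with a
// bandwidth and a propagation delay, used to estimate the transfer
// time of a message along a path. All names and figures here are
// illustrative, not taken from the tools used in the course.
public class TopologySketch {
    static class Link {
        final String from, to;
        final double mbps, delayMs;
        Link(String from, String to, double mbps, double delayMs) {
            this.from = from; this.to = to;
            this.mbps = mbps; this.delayMs = delayMs;
        }
    }

    // Transfer time of `bits` along a path: serialization + propagation.
    static double transferMs(List<Link> path, double bits) {
        double ms = 0;
        for (Link l : path) {
            ms += bits / (l.mbps * 1_000_000) * 1000 // serialization time
                + l.delayMs;                          // propagation delay
        }
        return ms;
    }

    public static void main(String[] args) {
        List<Link> path = Arrays.asList(
                new Link("host", "lanSwitch", 100, 0.1),      // LAN segment
                new Link("lanSwitch", "manRouter", 10, 2.0)); // MAN uplink
        System.out.printf("%.1f ms%n", transferMs(path, 12_000_000));
    }
}
```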

Relevance: 100.00%

Abstract:

This article reports research carried out with first-semester students of the Computer Science degree at the Faculty of Philosophy, Letters and Educational Sciences of the Universidad Central del Ecuador, whose purpose was to analyse the use of programming environments that are not symbolically mediated as a didactic tool for the development of computational thinking. The aim is to establish the possible advantages of applying this type of environment so that students develop computational thinking skills such as creativity, modelling, and abstraction, among others considered relevant in programming. The research followed a mixed methodology, with field and documentary research at a descriptive level. A questionnaire was used as the instrument for collecting data among the students of the degree. Finally, the collected information was processed using descriptive statistics in order to obtain results supporting the pertinent conclusions and recommendations.

Relevance: 100.00%

Abstract:

Machine vision is an important subject in computer science and engineering degrees. For laboratory experimentation, it is desirable to have a complete and easy-to-use tool. In this work we present a Java library oriented to teaching computer vision. We have designed and built the library from scratch with an emphasis on readability and understanding rather than on efficiency, although it can also be used for research purposes. JavaVis is an open source Java library oriented to the teaching of computer vision. It consists of a framework with several features that meet the demands of such teaching. It has been designed to be easy to use: the user does not have to deal with internal structures or the graphical interface, and should a student need to add a new algorithm, this can be done simply enough. After sketching the library, we focus on the experience students gain from using it in several computer vision courses. Our main goal is to find out whether the students understand what they are doing, that is, how much the library helps them grasp the basic concepts of computer vision. Over the last four years we have conducted surveys to assess how much students have improved their skills by using this library.
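
As an example of the kind of exercise such a teaching library targets, here is a classic grayscale conversion written against the plain Java standard library; this is deliberately not JavaVis's own API, which the abstract does not detail.

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// A classic introductory vision exercise in plain Java: convert an RGB
// image to grayscale with the usual luminance weights. This uses only
// the standard library and is not JavaVis's API; file paths are
// placeholders.
public class GrayscaleDemo {
    public static void main(String[] args) throws Exception {
        BufferedImage in = ImageIO.read(new File("input.png"));
        BufferedImage out = new BufferedImage(
                in.getWidth(), in.getHeight(), BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < in.getHeight(); y++) {
            for (int x = 0; x < in.getWidth(); x++) {
                int rgb = in.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                int lum = (int) (0.299 * r + 0.587 * g + 0.114 * b);
                out.setRGB(x, y, (lum << 16) | (lum << 8) | lum);
            }
        }
        ImageIO.write(out, "png", new File("gray.png"));
    }
}
```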

Relevance: 100.00%

Abstract:

Object inspectors are an essential category of tools that allow developers to comprehend the run-time behaviour of object-oriented systems. Traditional object inspectors favor a generic view that focuses on the low-level details of the state of single objects. Based on 16 interviews with software developers and a follow-up survey with 62 respondents, we identified a need for object inspectors that support different high-level ways to visualize and explore objects, depending on both the object and the current developer need. We propose the Moldable Inspector, a novel inspector model that enables developers to adapt the inspection workflow to suit their immediate needs by making the inspection context explicit, providing multiple interchangeable domain-specific views for each object, and supporting a workflow that groups together multiple levels of connected objects. We show that the Moldable Inspector can address multiple kinds of development needs involving a wide range of objects.
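
The core idea, multiple interchangeable domain-specific views per object, can be sketched as a small extension-point interface. The following Java sketch is only an analogy for the model described; it is not the Moldable Inspector's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Analogy-only sketch of the moldable idea: an object contributes
// several domain-specific views, and the inspector picks among them
// depending on the developer's current need. This is not the Moldable
// Inspector's actual API, just an illustration of the model.
public class MoldableSketch {
    interface InspectorView {
        String title();                // e.g. "Raw", "Hex", "Preview"
        String render(Object subject); // domain-specific presentation
    }

    interface Inspectable {
        List<InspectorView> views();   // each object offers its own views
    }

    static class Color implements Inspectable {
        final int rgb;
        Color(int rgb) { this.rgb = rgb; }

        public List<InspectorView> views() {
            List<InspectorView> vs = new ArrayList<>();
            vs.add(new InspectorView() {   // generic low-level state view
                public String title() { return "Raw"; }
                public String render(Object s) { return "rgb=" + rgb; }
            });
            vs.add(new InspectorView() {   // high-level domain-specific view
                public String title() { return "Hex"; }
                public String render(Object s) {
                    return String.format("#%06X", rgb);
                }
            });
            return vs;
        }
    }

    public static void main(String[] args) {
        Color c = new Color(0x3366FF);
        for (InspectorView v : c.views()) {
            System.out.println(v.title() + ": " + v.render(c));
        }
    }
}
```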

Relevance: 100.00%

Abstract:

Understanding the run-time behaviour of object-oriented applications entails the comprehension of run-time objects. Traditional object inspectors favor generic views that focus on the low-level details of the state of single objects. While universally applicable, this generic approach does not take into account the varying needs of developers who could benefit from tailored views and exploration possibilities. GTInspector is a novel moldable object inspector that provides different high-level ways to visualize and explore objects, adapted to both the object and the current developer need.