936 results for HRM capabilities
Abstract:
This paper presents an automated optimization framework able to provide network administrators with resilient routing configurations for link-state protocols such as OSPF or IS-IS. To deal with the formulated NP-hard optimization problems, the devised framework is underpinned by computational intelligence optimization engines, such as Multi-objective Evolutionary Algorithms (MOEAs). To demonstrate the framework's capabilities, two illustrative Traffic Engineering methods are described, which produce routing configurations that are robust to changes in the traffic demands and keep the network stable even in the presence of link failure events. The presented illustrative results clearly corroborate the usefulness of the proposed automated framework and of the devised optimization methods.
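The abstract does not include code; purely as a hedged illustration of the kind of optimization engine it refers to, the sketch below evolves OSPF-style link weights for a toy topology under two objectives (congestion under nominal demands and congestion under the worst single-link failure). The topology, demand matrix, capacities and both objective functions are hypothetical placeholders, not taken from the paper.

# Minimal sketch (not the authors' code) of a multi-objective evolutionary
# search over link weights. Topology, demands and objectives are toy placeholders.
import heapq, random

EDGES = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D")]   # toy topology
DEMANDS = {("A", "D"): 10.0, ("B", "C"): 5.0}                          # toy demand matrix
CAPACITY = 10.0                                                        # same capacity on every link

def shortest_path(weights, src, dst):
    """Dijkstra over the toy topology with the candidate link weights."""
    adj = {}
    for (u, v), w in zip(EDGES, weights):
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def max_utilisation(weights, failed_edge=None):
    """Objective: worst link utilisation after routing all demands on shortest paths."""
    load = {e: 0.0 for e in EDGES}
    w = [1e6 if e == failed_edge else wi for e, wi in zip(EDGES, weights)]
    for (s, t), vol in DEMANDS.items():
        path = shortest_path(w, s, t)
        for u, v in zip(path, path[1:]):
            e = (u, v) if (u, v) in load else (v, u)
            load[e] += vol
    return max(l / CAPACITY for l in load.values())

def evaluate(weights):
    # Objective 1: congestion under normal operation.
    # Objective 2: congestion under the worst single-link failure
    # (a toy surrogate for the resilience objectives mentioned in the abstract).
    normal = max_utilisation(weights)
    failure = max(max_utilisation(weights, failed_edge=e) for e in EDGES)
    return normal, failure

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evolve(pop_size=20, generations=30, w_min=1, w_max=20):
    pop = [[random.randint(w_min, w_max) for _ in EDGES] for _ in range(pop_size)]
    for _ in range(generations):
        children = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)
            cut = random.randrange(1, len(EDGES))           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:                       # weight mutation
                child[random.randrange(len(EDGES))] = random.randint(w_min, w_max)
            children.append(child)
        union = pop + children
        scored = [(evaluate(ind), ind) for ind in union]
        # keep the non-dominated individuals first, then fill up with the rest
        front = [i for f, i in scored if not any(dominates(g, f) for g, _ in scored)]
        pop = (front + [i for _, i in scored if i not in front])[:pop_size]
    return [(evaluate(i), i) for i in pop]

if __name__ == "__main__":
    for objs, weights in sorted(evolve())[:5]:
        print(objs, weights)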
Abstract:
Master's degree internship report in Music Teaching.
Abstract:
Master's degree internship report in the Teaching of Portuguese in the 3rd Cycle of Basic Education and in Secondary Education, and of Spanish in Basic and Secondary Education.
Abstract:
Master's dissertation in Bioengineering.
Abstract:
Master's dissertation in Strategy.
Abstract:
Over the last few decades, the development and use of Geographic Information Systems (GIS) and satellite positioning systems (GPS) have been strongly promoted to improve the productive efficiency of extensive cropping systems in agronomic, economic and environmental terms. These technologies make it possible to measure the spatial variability of site properties, such as apparent electrical conductivity and other terrain attributes, as well as their effect on the spatial distribution of yields. Site-specific management can then be applied within fields to improve the efficiency of agrochemical use, the protection of the environment and the sustainability of rural life. A wide range of precision-agriculture technologies is currently available to capture spatial variation across sites within a field. However, the optimal use of the large volumes of data produced by precision-agriculture machinery depends strongly on the capability to explore the information about the complex interactions underlying productive outcomes.
The spatial covariation between site properties and crop yields has traditionally been treated graphically or with classical geostatistical models based on the theory of regionalized variables. More recent statistical developments, notably linear mixed models, are promising tools for handling spatially correlated data such as those produced by precision agriculture. Moreover, given the multivariate nature of the measurements collected at each site, multivariate analysis techniques could provide valuable information for the visualization and exploitation of georeferenced data. Understanding the agronomic basis of the complex interactions that occur at the scale of production fields is now within reach with these new techniques. The objectives of this project are: (1) to develop methodological strategies, based on the complementarity of multivariate and geostatistical methods, for classifying sites within fields grown with grain crops and for studying the interrelationships between site and yield variables; and (2) to propose alternative linear mixed models, based on spatial correlation functions for the error terms, to explore spatial correlation patterns of within-field yields and soil properties and to build contour maps that promote a more sustainable agriculture.
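No code or data accompany the abstract; the following sketch, on synthetic data, illustrates one hypothetical way to combine a fixed soil-covariate effect with a spatially correlated component, using a Gaussian-process (kriging-style) regression in scikit-learn as a stand-in for the mixed models with spatially correlated errors mentioned above. All variable names, values and the kernel choice are assumptions made only for illustration.

# Hypothetical sketch (not the project's models): fixed effect of a soil covariate
# plus a spatially correlated residual fitted by Gaussian-process regression.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic "yield monitor" data: coordinates (m), apparent electrical conductivity
# (a stand-in soil property) and yield with a smooth spatial trend plus noise.
n = 300
coords = rng.uniform(0, 500, size=(n, 2))
ec = 20 + 0.01 * coords[:, 0] + rng.normal(0, 1, n)           # soil covariate
spatial_effect = 2 * np.sin(coords[:, 0] / 80) + np.cos(coords[:, 1] / 60)
yield_t_ha = 6 + 0.05 * ec + spatial_effect + rng.normal(0, 0.3, n)

# Fixed-effect part: a simple linear regression on the soil covariate.
X_fixed = np.column_stack([np.ones(n), ec])
beta, *_ = np.linalg.lstsq(X_fixed, yield_t_ha, rcond=None)
residuals = yield_t_ha - X_fixed @ beta

# Spatially correlated part: a GP on the residuals with an RBF correlation over
# distance plus a nugget term (WhiteKernel).
kernel = 1.0 * RBF(length_scale=100.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(coords, residuals)

# Predict over a grid to build a simple yield map (input for a contour map).
gx, gy = np.meshgrid(np.linspace(0, 500, 50), np.linspace(0, 500, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
ec_grid = 20 + 0.01 * grid[:, 0]                               # covariate on the grid
pred = np.column_stack([np.ones(len(grid)), ec_grid]) @ beta + gp.predict(grid)
print("fitted kernel:", gp.kernel_)
print("predicted yield range (t/ha):", pred.min().round(2), "-", pred.max().round(2))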
Abstract:
Life-history traits are sensitive to historical and current variation in environmental factors. Studying this variability through comparative studies provides evidence on the causes of the evolution of certain traits. Lizards are excellent models for studying sexual selection and the evolution of social and reproductive behaviour because their relatively low dispersal capability could have profound evolutionary consequences for the development of different strategies: more isolated populations may be more strongly influenced by local selective forces, showing high spatial and temporal heterogeneity. This study therefore assesses whether lizards of the genus Tupinambis show different reproductive strategies in different ecological contexts of Córdoba province. We will analyse several life-history traits in populations of these species, such as size structure, operational sex ratio, reproductive frequency, clutch size, reproductive body condition, size at sexual maturity, sperm characteristics and choice of nesting sites. We will also analyse the genetic structure of the populations to infer historical demographic processes and current patterns of gene flow and connectivity.
Abstract:
Today's advances in computing power are driven by the parallel processing capabilities of the available hardware architectures. These architectures accelerate algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture; however, converting an algorithm into a suitable parallel form is complex, and that form is specific to each type of parallel hardware. Most current general-purpose processors integrate several cores on a single chip, forming Symmetric Multi-Processing (SMP) units; nowadays it is hard to find a desktop processor without this kind of parallelism, and the trend is towards ever larger core counts. Graphics Processing Units (GPUs), originally designed to handle video processing, have in turn developed their computing power by integrating many processing units, and current boards can run on the order of 200 to 400 parallel threads. These processors are very fast but specialized for the task they were designed for; because that task has much in common with scientific computing, the devices have been reoriented as General-Purpose Graphics Processing Units (GPGPUs). Unlike the SMP processors mentioned above, GPGPUs are not general purpose: the memory available on each board is limited and only certain patterns of parallel processing make their use productive, so algorithm implementations must be addressed carefully. Finally, Field-Programmable Gate Arrays (FPGAs) are programmable devices that can implement hardware logic with low latency, high parallelism and deep pipelines, and can therefore be used to implement specific algorithms that must run at very high speed; their drawback is that programming and debugging the instantiated algorithm is considerably harder and more time-consuming than software approaches. Given this diversity of parallel processors, our work aims to analyse the specific characteristics of each of them and their impact on the structure of algorithms, so that their use yields processing performance commensurate with the resources employed, and to combine them in ways that are mutually beneficial. Specifically, starting from the hardware characteristics, we determine the properties a parallel algorithm must have in order to be accelerated; the characteristics of the parallel algorithms in turn determine which of these hardware types is most suitable for their instantiation. In particular, we consider the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
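As a hedged, self-contained illustration of the SMP-style parallelism discussed above (not code from the project), the sketch below runs a CPU-bound task with no data dependences between work items, first serially and then across all available cores with Python's multiprocessing; the task and problem sizes are arbitrary placeholders.

# Minimal SMP sketch: independent work items split across cores with multiprocessing.
import math
import time
from multiprocessing import Pool, cpu_count

def heavy_task(n: int) -> float:
    """A CPU-bound kernel with no dependence on the other work items."""
    return sum(math.sqrt(i) for i in range(n))

if __name__ == "__main__":
    work = [1_500_000] * 8           # independent work items (embarrassingly parallel)

    t0 = time.perf_counter()
    serial = [heavy_task(n) for n in work]
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool(processes=cpu_count()) as pool:    # one worker per core (SMP)
        parallel = pool.map(heavy_task, work)
    t_parallel = time.perf_counter() - t0

    assert serial == parallel
    print(f"serial: {t_serial:.2f}s  parallel ({cpu_count()} cores): {t_parallel:.2f}s")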
Abstract:
The sustained economic growth that has been experienced in the Irish economy in recent years has relied, to a large extent, on the contribution and performance of those industry sectors that possess the ability to provide high-value-added products and services to domestic and international markets. One such contributor has been the Technology sector. However, the performance of this sector relies upon the availability of the necessary capabilities and competencies for Technology companies to remain competitive. The Expert Group on Future Skills Needs have forecasted future skills shortages in this sector. The purpose of this research has been to examine the extent to which Irish Technology companies are taking measures to meet changing skills requirements, through training and development interventions. Survey research methods (in the form of a mail questionnaire, supported by a Web-based questionnaire) have been used to collect information on the expenditure on, and approach to, training and development in these companies, in addition to the methods, techniques and tools/aids that are used to support the delivery of these activities. The contribution of Government intervention has also been examined. The conclusions have been varied. When the activities of the responding companies are considered in isolation, the picture to emerge is primarily positive. Although the expenditure on training and development is slightly lower than that indicated in previous studies, the results vary by company size. Technical employees are clearly the key focus of training provision, while Senior Managers and Directors, Clerical and Administrative staff and Manual workers are a great deal more neglected in training provision. Expenditure on, and use of, computer-based training methods is high, as is the use of most of the specified techniques for facilitating learning. However, when one considers the extent to which external support (in the form of Government interventions and cooperation with other companies and with education and training providers) is integrated into the overall training practices of these companies, significant gaps in practice are identified. The thesis concludes by providing a framework to guide future training and development practices in the Technology sector.
Abstract:
The objective of this thesis is to compare and contrast environmental licensing systems, for the wood panel industry, in a number of countries in order to determine which system is the best from an environmental and economic point of view. The thesis also examines the impact which government can have on industry and the type of licensing system in operation in a country. Initially, the thesis investigates the origins of the various environmental licensing systems which are in operation in Ireland, Scotland, Wales, France, USA and Canada. It then examines the Environmental Agencies which control and supervise industry in these countries. The impact which the type of government (i.e. unitary or federal) in charge in any particular country has on industry and the Regulatory Agency in that country is then described. Most of the mills in the thesis make a product called OSB (Oriented Strand Board) and the manufacturing process is briefly described in order to understand where the various emissions are generated. The main body of the thesis examines a number of environmental parameters which have emission limit values in the licenses examined, although not all of these parameters have emission limit values in all of the licenses. All of these parameters are used as indicators of the potential impact which the mill can have on the environment. They have been set at specific levels by the Environmental Agencies in the individual countries to control the impact of the mill. Following on from this, the two main types of air pollution control equipment (WESPs and RTOs) are described in regard to their function and capabilities. The mill licenses are then presented in the form of results tables which compare air results and water results separately. This is due to the fact that the most significant emission from this type of industry is to air. A matrix system is used to compare the licenses so that the comparison can be as objective as possible. The discussion examines all of the elements previously described and from this it was concluded that the IPC licensing system is the best from an environmental and economic point of view. It is a much more expensive system to operate than the other systems examined, but it is much more comprehensive and looks at the mill as a whole rather than fragmenting it. It was also seen that the type of environmental licensing system which is in place in a country can play a role in the locating of an industry as certain systems were seen to have more stringent standards attached to them. The type of standard in place in a country is in turn influenced by the type of government which is in place in that country.
Abstract:
This is a study of a state-of-the-art implementation of a new computer integrated testing (CIT) facility within a company that designs and manufactures transport refrigeration systems. The aim was to use state-of-the-art hardware, software and planning procedures in the design and implementation of three CIT systems. Typical CIT system components include data acquisition (DAQ) equipment, application and analysis software, communication devices, computer-based instrumentation and computer technology. It is shown that the introduction of computer technology into the area of testing can have a major effect on such issues as efficiency, flexibility, data accuracy, test quality, data integrity and much more. Findings reaffirm how the overall area of computer integration continues to benefit any organisation, but with more recent advances in computer technology, communication methods and software capabilities, less expensive, more sophisticated test solutions are now possible. This allows more organisations to benefit from the many advantages associated with CIT. Examples of computer integration test set-ups and the benefits associated with computer integration have been discussed.
Abstract:
There are presently over 182 RBC plants treating domestic wastewater in the Republic of Ireland, 136 of which have been installed since 1986. The use of this treatment plant technology, although not new, is becoming increasingly popular. The aim of this research was to assess the effects that a household detergent has on rotating biological contactor (RBC) treatment plant efficiency. Household detergents contribute phosphorus to the surrounding environment and can also remove beneficial biomass from the disc media. A simple modification was made to a conventional flat disc unit to increase the oxygen transfer of the process. The treatment efficiency of the modified RBC (with aeration cups attached) was assessed against a parallel conventional system, with and without detergent loading. The parameters monitored were chemical oxygen demand (COD), biochemical oxygen demand (BOD), nitrates, phosphates, dissolved oxygen, the motor's power consumption, pH, and temperature. Some microscopic analysis of the biofilm was also to be carried out. The treatment efficiency of both units was compared on the basis of COD/BOD removal. The degree of nitrification achievable by both units was also assessed, with any fluctuations in pH noted. The phosphorus removal capabilities of both units were monitored. Relationships between detergent concentrations and COD removal efficiencies were also analysed.
Abstract:
In this paper we investigate various algorithms for performing the Fast Fourier Transformation (FFT) and Inverse Fast Fourier Transformation (IFFT), and proper techniques for maximizing FFT/IFFT execution speed, such as pipelining or parallel processing, and the use of memory structures with pre-computed values (look-up tables, LUTs) or other dedicated hardware components (usually multipliers). Furthermore, we discuss the optimal hardware architectures that best apply to the various FFT/IFFT algorithms, along with their abilities to exploit parallel processing with minimal data dependences in the FFT/IFFT calculations. An interesting approach that is also considered in this paper is the application of the integrated processing-in-memory Intelligent RAM (IRAM) chip to high-speed FFT/IFFT computing. The results of the assessment study emphasize that the execution speed of the FFT/IFFT algorithms is tightly connected to the capability of the FFT/IFFT hardware to support the parallelism provided by the given algorithm. Therefore, we suggest that the basic Discrete Fourier Transform (DFT)/Inverse Discrete Fourier Transform (IDFT) can also provide high performance by utilizing a specialized FFT/IFFT hardware architecture that can exploit the parallelism provided by the DFT/IDFT operations. The proposed improvements include simplified multiplications over symbols given in the polar coordinate system, using sine and cosine look-up tables, and an approach for performing parallel addition of N input symbols.
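As a hedged software illustration only (the paper itself targets hardware architectures), the sketch below contrasts a direct DFT, whose N output bins are independent and therefore easy to compute in parallel, with an iterative radix-2 FFT that draws its twiddle factors from a pre-computed look-up table instead of evaluating sin/cos per butterfly; the input data are arbitrary.

# Sketch: direct DFT (parallel-friendly rows) vs. radix-2 FFT with a twiddle LUT.
import cmath

def dft(x):
    """Direct DFT: each output bin is an independent dot product."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft_radix2(x):
    """Iterative radix-2 FFT with a precomputed twiddle LUT; len(x) must be a power of 2."""
    N = len(x)
    assert N and (N & (N - 1)) == 0, "length must be a power of two"
    lut = [cmath.exp(-2j * cmath.pi * k / N) for k in range(N // 2)]   # twiddle LUT

    # bit-reversal permutation
    out = list(x)
    j = 0
    for i in range(1, N):
        bit = N >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            out[i], out[j] = out[j], out[i]

    size = 2
    while size <= N:
        half, step = size // 2, N // size
        for start in range(0, N, size):              # independent butterfly groups
            for k in range(half):
                w = lut[k * step]                    # LUT lookup instead of exp()
                a = out[start + k]
                b = out[start + k + half] * w
                out[start + k] = a + b
                out[start + k + half] = a - b
        size *= 2
    return out

if __name__ == "__main__":
    x = [complex(i % 5, 0) for i in range(16)]
    ref, fast = dft(x), fft_radix2(x)
    print(max(abs(a - b) for a, b in zip(ref, fast)))   # ~1e-13: both agree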
Abstract:
Variational steepest descent approximation schemes for the modified Patlak-Keller-Segel equation with a logarithmic interaction kernel in any dimension are considered. We prove the convergence of the suitably time-interpolated implicit Euler scheme, defined in terms of the Euclidean Wasserstein distance, associated to this equation for sub-critical masses. As a consequence, we recover the recent result on the global-in-time existence of weak solutions to the modified Patlak-Keller-Segel equation with the logarithmic interaction kernel in any dimension in the sub-critical case. Moreover, we show how this method performs numerically in one dimension. In this particular case, the numerical scheme corresponds to a standard implicit Euler method for the pseudo-inverse of the cumulative distribution function. We demonstrate its capability to reproduce the blow-up of solutions for super-critical masses easily, without the need for mesh refinement.
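The abstract does not write the scheme out; as a sketch only, with \mathcal{F} denoting the Patlak-Keller-Segel free energy and \tau > 0 the time step (notation assumed here, not taken from the paper), one variational implicit Euler step of the kind described takes the form

% Generic variational implicit Euler (JKO-type) step; notation assumed.
\[
  \rho^{k+1} \in \operatorname*{arg\,min}_{\rho}
  \left\{ \frac{1}{2\tau}\, W_2^{2}\bigl(\rho,\rho^{k}\bigr) + \mathcal{F}[\rho] \right\},
\]

where W_2 denotes the Euclidean Wasserstein distance; in one dimension this step can be rewritten as a standard implicit Euler method for the pseudo-inverse of the cumulative distribution function, as stated above.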