950 results for Experts Architectures


Relevance:

10.00%

Publisher:

Abstract:

Internship report for the Master's degree in Informatics Teaching.

Relevance:

10.00%

Publisher:

Abstract:

In recent decades, interest in multi-scale hierarchical modelling has grown in the field of mechanics, as well as in wood products and timber engineering. A main motivation for hierarchical modelling is to understand how properties, composition and structure at lower scale levels influence, and may be used to predict, material properties at the macroscopic and structural engineering scales. This chapter presents the applicability of statistical and probabilistic methods, such as the Maximum Likelihood method and Bayesian methods, to the representation of timber's mechanical properties and to their inference accounting for prior information obtained at different scales of importance. These methods allow distinct reference properties of timber, such as density, bending stiffness and strength, to be analysed, and information obtained through different non-destructive, semi-destructive or destructive tests to be considered hierarchically. The basis and fundamentals of the methods are described, and recommendations and limitations are discussed. The methods may be used in several contexts; however, they require expert knowledge to assess the correct statistical fit and to define the correlation structure between properties.
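
As a purely illustrative sketch of the two method families the chapter names (the normal model, the data and the prior below are assumptions for demonstration, not values from the chapter), the following combines a maximum-likelihood fit of a timber reference property with a conjugate Bayesian update of its mean from prior, lower-scale information:

```python
# Purely illustrative: model, data and prior are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical bending-strength measurements (MPa) from destructive tests.
strength = rng.normal(40.0, 6.0, size=30)

# Maximum Likelihood: for a normal model the MLEs are the sample mean and
# the (biased) sample standard deviation.
mu_ml, sigma_ml = strength.mean(), strength.std()

# Bayesian update: prior on the mean from lower-scale (e.g. non-destructive)
# information; the measurement standard deviation is treated as known.
mu0, tau0 = 38.0, 4.0   # prior mean and prior std (assumed)
sigma = 6.0             # assumed known measurement std
n = len(strength)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mean = post_var * (mu0 / tau0**2 + n * strength.mean() / sigma**2)

print(f"ML fit:    mu = {mu_ml:.1f} MPa, sigma = {sigma_ml:.1f} MPa")
print(f"Posterior: mu = {post_mean:.1f} MPa (sd {post_var**0.5:.1f})")
```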

Relevance:

10.00%

Publisher:

Abstract:

Current data mining engines are difficult to use, requiring optimization by data mining experts to produce optimal results. To solve this problem, a new concept was devised: maintaining the functionality of current data mining tools while adding pervasive characteristics, such as invisibility and ubiquity, that focus on the user, improving ease of use and usefulness through autonomous and intelligent data mining processes. This article introduces an architecture for a data mining engine composed of four major components: database; middleware (control); middleware (processing); and interface. These components are interlinked but scale independently, allowing the system to adapt to the user's needs. A prototype was developed to test the architecture. The results are very promising, demonstrating the architecture's functionality as well as the need for further improvements.
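
The article itself does not publish code; the sketch below is one hypothetical way to wire the four named components so that they are interlinked yet independently scalable, with queues between interface, control middleware, processing middleware and database:

```python
# Hypothetical wiring only: component behaviour and names are assumptions.
import queue
import threading

requests = queue.Queue()   # interface -> control middleware
jobs = queue.Queue()       # control middleware -> processing middleware

def control_middleware():
    # Decides autonomously how each mining request should be run.
    while True:
        req = requests.get()
        if req is None:
            jobs.put(None)
            break
        jobs.put({"task": req, "params": "auto-tuned"})

def processing_middleware(database):
    # Executes mining jobs against the database component.
    while True:
        job = jobs.get()
        if job is None:
            break
        print("mined", job["task"], "from", database, "with", job["params"])

database = "sales.db"
workers = [threading.Thread(target=control_middleware),
           threading.Thread(target=processing_middleware, args=(database,))]
for w in workers:
    w.start()
requests.put("frequent-itemsets")  # the interface component would enqueue this
requests.put(None)                 # shutdown signal
for w in workers:
    w.join()
```

Because each stage communicates only through queues, more control or processing workers can be added independently, which is the "independent scaling" property the abstract claims.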

Relevance:

10.00%

Publisher:

Abstract:

The concepts involved in sustainable textile fashion, which demand good knowledge of raw materials, processes, end-use properties and distribution circuits, among others, can determine both how a textile product is designed and the behavior of the consumer with regard to lifestyle and buying decisions. A textile product's life integrates raw materials, their processing, distribution, use by the consumer and the destination of the product after its useful lifetime, that is, its complete life cycle. It is very important to recognize the power of consumers to influence sustainability-related parameters, namely when they decide how, when and why to buy, and afterwards through their attitudes during and after use. The conscious act of consumption involves ethical, ecological and technical knowledge, in which the concern is the overall life cycle of the fashion product and not exclusively the aesthetic and symbolic values strongly tied to its ephemeral nature. The present work proposes the classification of textile products by means of an innovative label that establishes a rating related to the life of fashion products, using parameters considered to have special impact on the life cycle, such as textile fibers, processing conditions, generated waste, commercialization circuits, durability and cleaning procedures. This label for sustainable fashion products aims to help stakeholders take informed attitudes and correct decisions, promoting the objectives of sustainable fashion among designers, consumers and industry experts.
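
The abstract names the label's parameters but no scoring formula; the following toy sketch (weights and 0-10 scale entirely invented) merely illustrates how such per-parameter assessments could be aggregated into a single rating:

```python
# Invented weights and scale, for illustration only.
WEIGHTS = {
    "fibers": 0.25, "processing": 0.20, "wastes": 0.20,
    "circuits": 0.15, "durability": 0.10, "cleaning": 0.10,
}

def fashion_life_rating(scores: dict[str, float]) -> float:
    """Weighted average of per-parameter scores on a 0-10 scale."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Example garment: organic fiber, clean processing, short circuit, durable.
print(fashion_life_rating({
    "fibers": 9, "processing": 7, "wastes": 6,
    "circuits": 8, "durability": 9, "cleaning": 5,
}))  # -> 7.45
```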

Relevance:

10.00%

Publisher:

Abstract:

With the aim of contributing to the quality of school-level Chemistry teaching in the Republic of Angola, this investigation analysed the characteristics of the Pedagogical Content Knowledge and the Teaching Quality of teachers considered experts in that subject area. The guiding research question was: "What characterizes the Pedagogical Content Knowledge of expert Chemistry teachers, and how does it relate to Teaching Quality?". The investigation is quasi-experimental, with a descriptive and exploratory character. The sample consisted of groups of teachers and students (experts and non-experts). Data were collected through interviews with the expert teachers and observation of their classes via image capture, and through questionnaires and assessment tests given to the students of both groups. The analysis of the results followed both quantitative and qualitative methodologies. The results reveal that expert teachers share characteristics, expressed in more active pedagogical interventions, that make them more effective; the characteristics of their interventions foster improvements in teaching quality. Overall, the conclusions imply a need for teacher training in order to improve the quality of Chemistry teaching in Angola.

Relevance:

10.00%

Publisher:

Abstract:

Goat milk production is considered, in our country and in the province of Córdoba, a productive alternative for the sustainable and socio-economic development of the population. Moreover, there is growing demand for this milk in the national and international markets, so producers must guarantee its safety and quality in accordance with current regulations. The control and treatment of the different diseases is therefore of vital importance, both to maximize herd production and to meet the required safety standards. In this context, caprine mastitis is one of the diseases affecting the sector's productivity, and antimicrobial therapy is one of the measures used to control it. This project will work with marbofloxacin and cefquinome, establishing rational (effective and safe) guidelines for their use against this condition at the regional level. The efficacy indicators will be set according to integrated pharmacokinetic (PK) and pharmacodynamic (PD) parameters. The PD parameters will be calculated by determining the minimum inhibitory concentrations (MIC) of bacterial strains isolated from caprine mastitis cases in Córdoba. Pharmacokinetic parameters will be established at single and multiple doses for marbofloxacin (5 mg/kg IV, IM) and cefquinome (2 mg/kg IV, IM and IMM) from serum and milk samples of Anglo Nubian goats (n = 6 per antimicrobial; crossover design with respect to the route of administration). Drug concentrations in these fluids will be determined by high-performance liquid chromatography. The PK/PD results for both drugs will be compared with the parameters recommended by experts for each class of antimicrobial and used as the basis for recommending a rational therapy, fundamental to optimizing dosage, guaranteeing clinical efficacy, and minimizing the selection and spread of resistant strains of pathogens.
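
As a hedged illustration of the integrated PK/PD indices the project relies on (all concentration-time values, the MIC and the targets below are invented, not project results), AUC24/MIC for a concentration-dependent drug such as marbofloxacin and %T>MIC for a time-dependent drug such as cefquinome can be computed from sampled data:

```python
# Illustrative only: concentrations, MIC and targets are invented.
import numpy as np

t = np.array([0.5, 1, 2, 4, 8, 12, 24])                 # h after dosing
conc = np.array([4.2, 3.6, 2.5, 1.4, 0.5, 0.2, 0.05])   # ug/mL (hypothetical)
mic = 0.25                                              # ug/mL (hypothetical isolate)

# AUC over the sampled interval by the trapezoidal rule.
auc24 = float(np.sum((conc[1:] + conc[:-1]) / 2 * np.diff(t)))
print(f"AUC24/MIC = {auc24 / mic:.0f}")   # fluoroquinolone-type index

# %T>MIC: fraction of the sampled interval with concentration above the MIC,
# via linear interpolation of the concentration-time curve.
fine_t = np.linspace(t[0], t[-1], 1000)
fine_c = np.interp(fine_t, t, conc)
print(f"%T>MIC = {100 * np.mean(fine_c > mic):.0f}%")   # beta-lactam-type index
```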

Relevance:

10.00%

Publisher:

Abstract:

Today's advances in computing power are driven by parallel processing, given the characteristics of the new hardware architectures. Using this hardware properly accelerates the execution of algorithms (programs). However, correctly converting an algorithm into its parallel form is complex, and that parallel form is, in turn, specific to each type of parallel hardware. The most common general-purpose processors today are multicore parallel processors, also called Symmetric Multi-Processors (SMP). It is now hard to find a desktop processor without some form of SMP parallelism, and the development trend is towards processors with ever more cores. Graphics Processing Units (GPU), in turn, have developed their computing power by incorporating multiple processing units in their electronic design, to the point that boards capable of 200 to 400 parallel processing threads are now easy to find. These processors are very fast and specific to the task for which they were developed, mainly video processing. However, since this kind of processing has much in common with scientific computing, these devices have been reoriented under the name General-Purpose Graphics Processing Unit (GPGPU). Unlike the SMP processors mentioned above, GPGPUs are not general purpose, and their general use is complicated by the limited amount of memory available on each board and by the type of parallel processing required for their use to be productive. Programmable logic devices (FPGA) can carry out large numbers of operations in parallel, so they can be used to implement specific algorithms that exploit the parallelism they offer. Their drawback derives from the complexity of programming and testing an algorithm instantiated on the device. Given this diversity of parallel processors, our work aims to analyse the specific characteristics of each, and their impact on the structure of algorithms, so that their use achieves processing performance commensurate with the resources employed and so that they can be combined in ways that complement each other beneficially. Specifically, starting from the characteristics of the hardware, we aim to determine the properties a parallel algorithm must have in order to be accelerated; these properties, in turn, determine which of these new types of hardware is most suitable for its instantiation. In particular, we consider the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
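
As a minimal sketch of the kind of dependence-free, data-parallel workload that maps well onto an SMP multicore (not code from the paper), consider:

```python
# Not from the paper: each element is independent - no shared state, no
# synchronization - which is exactly the property that makes SMP
# acceleration straightforward. GPU or FPGA targets need different tooling.
from multiprocessing import Pool

def work(x: int) -> int:
    return sum(i * i for i in range(x))

if __name__ == "__main__":
    data = [200_000] * 16
    with Pool() as pool:                 # one worker per available core
        results = pool.map(work, data)   # elements processed in parallel
    print(len(results), "results")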

Relevance:

10.00%

Publisher:

Abstract:

The main objective of this thesis was to produce a detailed report on flooding, with specific reference to the Clare River catchment. Past flooding in the catchment was assessed, focusing on the November 2009 flood event, and a Geographic Information System was used to produce a graphical representation of that flood's spatial distribution. Flood risk is prominent within the Clare River catchment, especially in the region of Claregalway. The November 2009 events produced significant fluvial flooding from the Clare River, resulting in considerable damage to property; there were also hidden costs, such as the economic impact of closing the N17 until the floodwater subsided. Land use and channel conditions are traditional factors long recognised for their effect on flooding processes; they were examined in the context of the Clare River catchment to determine whether they had any significant effect on flood flows. Climate change has become recognised as a factor that may produce more significant and frequent flood events in the future. Many experts expect climate change to increase the intensity and duration of rainfall in western Ireland, which would have significant implications for the Clare River catchment, already vulnerable to flooding. Flood estimation techniques are a key aspect of understanding and preparing for flood events. This study uses methods based on the statistical analysis of recorded data, and methods based on a design rainstorm and a rainfall-runoff model, to estimate flood flows. These provide a mathematical basis for evaluating the impacts of various factors on flooding and for generating practical design floods, which can be used in the design of flood relief measures. The final element of the thesis comprises the author's recommendations on how flood risk management techniques can reduce the existing flood risk in the Clare River catchment. Future implications for flood risk from factors such as climate change and poor planning practices are also considered.
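
The thesis's flow records are not reproduced here; as a hedged illustration of the first technique it names (statistical analysis of recorded data), the sketch below fits a Gumbel (EV1) distribution by the method of moments to invented annual-maximum flows and derives a 100-year design flood:

```python
# Hedged sketch: annual-maximum flows are invented, not Clare River data.
import numpy as np

amax = np.array([82, 95, 110, 74, 130, 101, 88, 145, 97, 120])  # m^3/s
mean, std = amax.mean(), amax.std(ddof=1)

# Method-of-moments Gumbel parameters.
beta = std * np.sqrt(6) / np.pi   # scale
mu = mean - 0.5772 * beta         # location (Euler-Mascheroni constant)

def design_flood(T: float) -> float:
    """Flow with return period T years under the fitted Gumbel model."""
    return mu - beta * np.log(-np.log(1 - 1 / T))

print(f"Q100 ~= {design_flood(100):.0f} m^3/s")
```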

Relevance:

10.00%

Publisher:

Abstract:

The twin objectives of the work described were to construct nutrient balance models (NBM) for a range of Irish animal production systems and to evaluate their potential as a means of estimating the nutrient composition of farm wastes. The NBM has three components. The first is the intake of nutrients in the animal's diet. The second is retention, the nutrients the animal retains for the production of milk, meat or eggs. The third is the balance, the difference between nutrient intake and retention. Data on intake levels and their nutrient value for dairy cow, beef cattle, pig and poultry systems were assembled, with literature searches and interviews with national experts as the primary sources of information. NBMs were then constructed for each production system. Summary tables were assembled of the nutrient values of the common diet constituents used in Irish animal production systems, the nutrient composition of the animal products, and the NBMs (nutrient intake, retention and excretion) for a range of production systems. These represent the first comprehensive data set of this type for Irish animal production systems, and there was generally good agreement between the derived NBM values and those published in the literature. The NBMs were validated on a number of farms. Data on animal numbers, fertiliser use, concentrate inputs and production output were recorded on seven farms; from these data a nutrient input/output balance was constructed for each farm and compared with the NBM estimate of the farm nutrient balance. The results showed good agreement between the measured balance and the NBM estimate, particularly for the pig and poultry farms. However, the validation emphasised the inherent risks associated with NBMs: the average values used for feed intake and production parameters may under- or over-estimate the actual nutrient balance on individual farms where these variables are substantially different. On the grassland farms there was a poor correlation between the input/output estimate and the NBM, possibly because the soil's contribution to the nutrient balance was omitted. Nevertheless, the results indicate that the NBMs developed are a potentially useful tool for estimating nutrient balances, and they also serve to highlight the significant fraction of nutrient inputs into farming systems that is retained on the farm. The potential of the NBM as a means of estimating the nutrient composition of farm wastes was evaluated on two farms. Feed intake and composition, animal production and slurry production were monitored during the indoor winter feeding period, and slurry samples were taken for analysis. The appropriate NBMs were used to estimate the nutrient balance for each farm, and the nutrient content of the slurry produced was calculated. There was good agreement between the NBM estimate and the measured values. This preliminary evaluation suggests that the NBM can provide the farmer with a simple means of estimating the nutrient value of slurry.
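
As a toy illustration of the three-component model (all figures are invented placeholders, not values from the study's summary tables):

```python
# Toy sketch of the intake/retention/balance structure described above.
def nutrient_balance(intake_kg: float, retention_kg: float) -> float:
    """Balance (excretion) = intake - retention, per animal per year."""
    return intake_kg - retention_kg

# Hypothetical dairy cow, nitrogen (kg N / cow / year).
n_intake = 150.0    # diet N intake (assumed)
n_retained = 35.0   # N retained in milk and body tissue (assumed)
print(f"N excreted: {nutrient_balance(n_intake, n_retained):.0f} kg/year")
```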

Relevance:

10.00%

Publisher:

Abstract:

The overall purpose of this study was to develop a thorough inspection regime for onsite wastewater treatment systems that is practical and could be implemented under all site conditions across the country. With approximately 450,000 onsite wastewater treatment systems in Ireland, a risk-based methodology is required for site selection. Such an approach identifies the areas with the highest potential risk to human health and the environment, and those sites should be inspected first. To gain the knowledge required to develop an inspection regime, in-depth and extensive research was carried out. The following areas of pertinent interest were examined and reviewed: the history of domestic wastewater treatment, relevant wastewater legislation and guidance documents, and potential detrimental impacts. A questionnaire from a prior study, which assessed the resources available and the types of inspections currently undertaken by local authorities, was analysed. In addition, interviews were carried out with several experts in the area of domestic wastewater treatment; the interviews focused on twelve key questions directed towards the experts' opinions on the vital aspects of developing an inspection regime. The background research, combined with the questionnaire analysis and the information from the interviews, provided a solid foundation for the development of an inspection regime. Chapter 8 outlines the inspection regime developed for this study, which includes a desktop study, consultation with the homeowners, a visual site inspection, non-invasive site tests, and inspection of the treatment systems. The general opinion from the interviews was that a standardised approach to the inspections was necessary; for this reason an inspection form was produced which provides a standard, systematic procedure for inspectors to follow (displayed in Appendix 3). The development of a risk-based methodology for site selection was discussed, and a procedure similar in approach to the Geological Survey of Ireland's Groundwater Protection Schemes was proposed. The EPA is currently developing a risk-based methodology, but it is not yet available to the general public; however, the EPA provided a copy of a paper outlining its key aspects. The methodology will use risk maps that take account of the following parameters: housing density, areas with inadequate soil conditions, and the risk of water pollution through surface and subsurface pathways. Sites identified as having the highest potential risk to human health and the environment shall be inspected first. Based on the research carried out, a number of recommendations were made, outlined in Chapter 10. The principal conclusion was that home owners need to understand that these systems dispose of effluent to the ground, where it becomes part of the hydrological cycle; if the systems fail to operate satisfactorily they are therefore a potential hazard to the environment and human health, and it is the owners, their families and their neighbours who are at most immediate risk.
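
The EPA methodology itself was unpublished at the time of the study, so the following is a purely hypothetical sketch of how the three mapped parameters could be combined into an inspection ranking; the weights and thresholds are invented:

```python
# Hypothetical scoring scheme, not the EPA's: weights/thresholds invented.
def site_risk(housing_density: float, soil_ok: bool, pathway_risk: float) -> float:
    """Crude composite risk score in [0, 1]; higher = inspect sooner."""
    density_score = min(housing_density / 50.0, 1.0)   # dwellings per km^2
    soil_score = 0.0 if soil_ok else 1.0               # inadequate soil flag
    return 0.4 * density_score + 0.3 * soil_score + 0.3 * pathway_risk

sites = {"A": (60, False, 0.8), "B": (10, True, 0.2)}
ranked = sorted(sites, key=lambda s: site_risk(*sites[s]), reverse=True)
print("inspection order:", ranked)   # highest-risk sites first
```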

Relevance:

10.00%

Publisher:

Abstract:

In this document, the Inter-American Committee of Cardiovascular Prevention and Rehabilitation, together with the South American Society of Cardiology, aimed to formulate strategies, measures, and actions for cardiovascular disease prevention and rehabilitation (CVDPR). In the context of the implementation of regional and national health policy in Latin American countries, the goal is to promote cardiovascular health and thereby decrease morbidity and mortality. The study group on Cardiopulmonary and Metabolic Rehabilitation from the Department of Exercise, Ergometry, and Cardiovascular Rehabilitation of the Brazilian Society of Cardiology has created a committee of experts to review the Portuguese version of the guideline and adapt it to the national reality. The mission of this document is to help health professionals adopt effective measures of CVDPR in routine clinical practice. The publication of this document and its broad implementation will contribute to the goal of the World Health Organization (WHO) of reducing worldwide cardiovascular mortality by 25% by 2025. The study group's priorities are the following:

• Emphasize the important role of CVDPR as an instrument of secondary prevention with a significant impact on cardiovascular morbidity and mortality;

• Join efforts towards knowledge of CVDPR, its dissemination, and its adoption in most cardiovascular centers and institutes in South America, prioritizing the adoption of cardiovascular prevention methods that are comprehensive, practical, simple, and have a good cost/benefit ratio;

• Improve the education of health professionals and patients through education programs on the importance of CVDPR services, targeted directly at the health system, clinical staff, patients, and community leaders, with the aim of decreasing the barriers to CVDPR implementation.

Relevance:

10.00%

Publisher:

Abstract:

An appropriate assessment of end-to-end network performance presumes highly efficient time tracking and measurement, with precise control over stopping and resuming program operation. In this paper, a novel approach to the problems of highly efficient and precise time measurement on PC platforms and ARM architectures is proposed. A new unified High Performance Timer and a corresponding software library offer a unified interface to the known time counters and automatically identify the fastest and most reliable time source available in the user space of a computing system. The research focuses on developing an approach to unified time acquisition from PC hardware, substituting the common way of getting the time value through Linux system calls. The presented approach obtains time values with nanosecond precision much faster than conventional means, and it also handles sequential time values, precise sleep functions and process resumption. This reduces the computing resources wasted during the execution of a sleeping process from 100% (busy-wait) to 1-1.5%, while maintaining the benefit of very accurate process resume times on long waits.
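
The authors' library is not reproduced here; the following Python sketch merely illustrates the hybrid idea the abstract describes: sleep through most of a wait at near-zero CPU cost, then busy-wait on a high-resolution counter for the final stretch to obtain a precise resume time.

```python
# Minimal sketch of the hybrid sleep/spin idea; not the paper's implementation.
import time

def precise_sleep(duration_s: float, spin_margin_s: float = 0.002) -> None:
    deadline = time.perf_counter_ns() + int(duration_s * 1e9)
    coarse = duration_s - spin_margin_s
    if coarse > 0:
        time.sleep(coarse)                    # OS sleep: ~0% CPU, coarse wake
    while time.perf_counter_ns() < deadline:  # short spin: precise wake
        pass

start = time.perf_counter_ns()
precise_sleep(0.010)
print(f"slept {(time.perf_counter_ns() - start) / 1e6:.3f} ms")
```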

Relevance:

10.00%

Publisher:

Abstract:

In this paper we investigate various algorithms for performing the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT), and techniques for maximizing FFT/IFFT execution speed, such as pipelining, parallel processing, and the use of memory structures with pre-computed values (look-up tables, LUT) or other dedicated hardware components (usually multipliers). Furthermore, we discuss the hardware architectures that best suit the various FFT/IFFT algorithms, along with their ability to exploit parallel processing with minimal data dependences in the FFT/IFFT calculations. An interesting approach also considered in this paper is the application of the integrated processing-in-memory Intelligent RAM (IRAM) chip to high-speed FFT/IFFT computing. The results of the assessment study emphasize that the execution speed of FFT/IFFT algorithms is tightly connected to the ability of the FFT/IFFT hardware to support the parallelism offered by the given algorithm. We therefore suggest that the basic Discrete Fourier Transform (DFT)/Inverse Discrete Fourier Transform (IDFT) can also deliver high performance when a specialized hardware architecture exploits the parallelism of the DFT/IDFT operations. The proposed improvements include simplified multiplication of symbols given in a polar coordinate system, using sine and cosine look-up tables, and an approach for performing parallel addition of N input symbols.
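
As a software analogue of the LUT-based simplification discussed (a minimal sketch, not the paper's hardware design), a direct DFT can draw its twiddle factors from precomputed sine/cosine tables:

```python
# Minimal sketch: direct DFT with a precomputed sine/cosine twiddle LUT.
import math

def dft_with_lut(x: list[complex]) -> list[complex]:
    n = len(x)
    # Twiddle-factor LUT: e^{-2*pi*i*k/n} for k = 0..n-1, computed once.
    lut = [complex(math.cos(2 * math.pi * k / n),
                   -math.sin(2 * math.pi * k / n)) for k in range(n)]
    return [sum(x[j] * lut[(j * k) % n] for j in range(n)) for k in range(n)]

x = [1, 2, 3, 4]
print([round(abs(v), 3) for v in dft_with_lut(x)])  # [10.0, 2.828, 2.0, 2.828]
```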


Relevance:

10.00%

Publisher:

Abstract:

This study presents research on the participation of the psychomotricist in children's play, to find out what kinds of conditions improve the development of the children's relational capacities. The research has four parts: 1) a theoretical review of the concepts of play, social participation and relationships; 2) a case study based on interviews with experts, psychomotricists and the observed psychomotricist, together with four in-depth observations; 3) categories of analysis derived from the triangulation of the theoretical and case-study sources of information; 4) discussion and conclusions about the psychomotricist's general actions, specific actions with the children, and the psychomotricist's beliefs.