53 results for Algorithms, Properties, the KCube Graphs

at Instituto Politécnico do Porto, Portugal


Relevance: 100.00%

Abstract:

Fractional calculus (FC) is currently being applied in many areas of science and technology. In fact, this mathematical concept helps researchers gain a deeper insight into several phenomena that integer-order models overlook. Genetic algorithms (GA) are an important tool for solving optimization problems that occur in engineering. This methodology applies the concepts that describe biological evolution to obtain optimal solutions in many different applications. In this line of thought, in this work we use the FC and GA concepts to implement the fractional-order electrical potential. The performance of the GA scheme, and the convergence of the resulting approximation, are analyzed. The results are examined for different numbers of charges and several fractional orders.
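The abstract does not reproduce the authors' implementation; as an illustration only, a minimal real-coded GA of the general kind described (selection, crossover, mutation) can be sketched in Python, here minimising a toy one-dimensional cost in place of the actual charge-placement objective:

```python
import random

def genetic_minimize(fitness, bounds, pop_size=40, generations=60,
                     mutation_rate=0.1, seed=1):
    """Minimal real-coded GA: truncation selection, blend crossover,
    Gaussian mutation. Illustrative only; parameters are assumptions."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        elite = scored[: pop_size // 2]        # keep the best half (elitism)
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = 0.5 * (a + b)              # blend crossover of two elites
            if rng.random() < mutation_rate:
                child += rng.gauss(0, 0.1 * (hi - lo))
            children.append(min(hi, max(lo, child)))
        pop = elite + children
    return min(pop, key=fitness)

# Toy stand-in for the cost function: minimum at x = 2.
best = genetic_minimize(lambda x: (x - 2.0) ** 2, bounds=(-10.0, 10.0))
```

Because the best half of the population always survives, the best solution found can only improve from generation to generation.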

Relevance: 100.00%

Abstract:

In this paper, the dynamics of a robotic bird are studied in terms of time response and robustness. The wing angle of attack and the velocity of the bird, the influence of the tail, the gliding flight and the flapping flight are analyzed. The results are encouraging for the construction of flying robots. The development of a computational simulation based on the dynamics of the robotic bird should allow testing strategies and different control algorithms, such as integer- and fractional-order controllers.

Relevance: 100.00%

Abstract:

The replacement of conventional synthetic films and coatings by biodegradable alternatives reduces the use of non-renewable resources and waste-disposal problems. Considering that Portugal is a major producer of leather, and consequently a large producer of related wastes, in this research bovine hair was tested for the production of biodegradable films directly by thermo-compression, allowing waste valorisation and a reduction of environmental pollution. The aim of this study was to determine the influence of different pre-treatments, applied to bovine hair removed by two processes (removal by mechanical action and removal by chemical process), on obtaining a biodegradable film with appropriate properties. The mechanical properties of these films were evaluated, namely strain at break, stress at break and Young's modulus. Additionally, colour, solubility and swelling in water were also studied. Mechanically removed hair only produced films after Na2S treatment. For chemically removed hair (immunization), film formation depends on the pre-treatment, and degreasing with petroleum ether or a sodium sulphide pre-treatment leads to better mechanical properties. The results obtained indicated that the pre-treatments play an important role in the final properties of the biodegradable films.

Relevance: 100.00%

Abstract:

A dc magnetron sputtering-based method to grow high-quality Cu2ZnSnS4 (CZTS) thin films, to be used as an absorber layer in solar cells, is being developed. This method combines dc sputtering of metallic precursors with sulfurization in S vapour and with post-growth KCN treatment for removal of possible undesired Cu2−xS phases. In this work, we report the results of a study of the effects of changing the precursors’ deposition order on the final CZTS films’ morphological and structural properties. The effect of KCN treatment on the optical properties was also analysed through diffuse reflectance measurements. Morphological, compositional and structural analyses of the various stages of the growth have been performed using stylus profilometry, SEM/EDS analysis, XRD and Raman spectroscopy. Diffuse reflectance studies have been done in order to estimate the band gap energy of the CZTS films. We tested two different deposition orders for the copper precursor, namely Mo/Zn/Cu/Sn and Mo/Zn/Sn/Cu. The stylus profilometry analysis shows high average surface roughness in the ranges 300–550 nm and 230–250 nm before and after KCN treatment, respectively. All XRD spectra show preferential growth orientation along (1 1 2) at 28.45°. Raman spectroscopy shows main peaks at 338 cm^−1 and 287 cm^−1, which are attributed to Cu2ZnSnS4. These measurements also confirm the effectiveness of the KCN treatment in removing Cu2−xS phases. From the analysis of the diffuse reflectance measurements, the band gap energy for both precursor sequences is estimated to be close to 1.43 eV. The KCN-treated films show a better-defined absorption edge; however, the band gap values are not significantly affected. Hot point probe measurements confirmed that CZTS had p-type semiconductor behaviour, and C–V analysis was used to estimate the majority carrier density, giving a value of 3.3 × 10^18 cm^−3.
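Band-gap estimates of this kind are commonly obtained by a Tauc extrapolation: for a direct-gap material, (αhν)² is linear in photon energy, and the energy-axis intercept of the linear region gives Eg. The sketch below is an illustration of that standard technique, not the authors' procedure; the absorption data are synthetic, generated to have a gap at the 1.43 eV reported above.

```python
def tauc_band_gap(energies_eV, alpha):
    """Estimate a direct band gap by fitting the linear part of
    (alpha * h * nu)^2 versus photon energy and extrapolating to zero."""
    y = [(e * a) ** 2 for e, a in zip(energies_eV, alpha)]
    ymax = max(y)
    # Keep only the absorption edge (the strong, linear part of the signal).
    pts = [(e, v) for e, v in zip(energies_eV, y) if v > 0.2 * ymax]
    n = len(pts)
    sx = sum(e for e, _ in pts); sy = sum(v for _, v in pts)
    sxx = sum(e * e for e, _ in pts); sxy = sum(e * v for e, v in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return -intercept / slope        # x-intercept of the fit = Eg

# Synthetic direct-gap data: alpha ~ sqrt(h*nu - Eg) / (h*nu), Eg = 1.43 eV.
E = [1.3 + 0.02 * i for i in range(30)]                      # 1.30 .. 1.88 eV
a = [max(0.0, e - 1.43) ** 0.5 / e * 1e4 for e in E]
Eg = tauc_band_gap(E, a)
```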

Relevance: 100.00%

Abstract:

In this paper we discuss challenges and design principles of an implementation of slot-based task-splitting algorithms in the Linux 2.6.34 kernel. We show that this kernel version provides the features required for implementing such scheduling algorithms, and that the real behavior of the scheduling algorithm is very close to the theoretical one. We run and discuss experiments on 4-core and 24-core machines.

Relevance: 100.00%

Abstract:

Introduction: Image resizing is a standard feature of Nuclear Medicine digital imaging. Upsampling is done by manufacturers to better fit the acquired images on the display screen, and it is applied whenever there is a need to increase - or decrease - the total number of pixels. This paper aims to compare the “hqnx” and “nxSaI” magnification algorithms with two interpolation algorithms – “nearest neighbor” and “bicubic interpolation” – in image upsampling operations. Material and Methods: Three distinct Nuclear Medicine images were enlarged 2 and 4 times with the different digital image resizing algorithms (nearest neighbor, bicubic interpolation, nxSaI and hqnx). To evaluate the pixel changes between the different output images, 3D whole-image plot profiles and surface plots were used in addition to the visual assessment of the 4x upsampled images. Results: In the 2x enlarged images the visual differences were not noteworthy, although bicubic interpolation clearly presented the best results. In the 4x enlarged images the differences were significant, with the bicubic-interpolated images again presenting the best results. Hqnx-resized images presented better quality than 4xSaI and nearest-neighbor interpolated images; however, their intense “halo effect” greatly affects the definition and boundaries of the image contents. Conclusion: The hqnx and nxSaI algorithms were designed for images with clear edges, so their use on Nuclear Medicine images is clearly inadequate. Of the algorithms studied, bicubic interpolation seems the most suitable, and its ever wider application seems to confirm this, establishing it as an efficient algorithm for many image types.
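As a point of reference for the comparison above, the simplest of the four methods, nearest-neighbour upsampling, can be sketched in a few lines of Python (an illustration, not the software used in the study; bicubic interpolation instead fits a cubic surface through the 4x4 neighbourhood of each target pixel, which is why it produces smoother edges):

```python
def upsample_nearest(image, factor):
    """Nearest-neighbour upsampling: each source pixel is replicated into a
    factor-by-factor block. `image` is a list of rows of grey values."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in image
        for _ in range(factor)          # repeat every row `factor` times
    ]

# A 2x2 test pattern enlarged 2x: every pixel becomes a 2x2 block.
big = upsample_nearest([[0, 255], [255, 0]], 2)
```

The blocky output of this replication is exactly the staircase artefact that interpolating methods are designed to avoid.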

Relevance: 100.00%

Abstract:

Multiprocessors, particularly in the form of multicores, are becoming standard building blocks for executing reliable software, but their use for applications with hard real-time requirements is non-trivial. Well-known real-time scheduling algorithms from the uniprocessor context (Rate-Monotonic [1] or Earliest-Deadline-First [1]) do not perform well on multiprocessors. For this reason, the scientific community in the area of real-time systems has produced new algorithms specifically for multiprocessors. Meanwhile, a proposal [2] exists for extending the Ada language with new basic constructs which can be used for implementing new real-time scheduling algorithms; the family of task-splitting algorithms is one of those emphasized in the proposal [2]. Consequently, assessing whether existing task-splitting multiprocessor scheduling algorithms can be implemented with these constructs is paramount. In this paper we present a list of state-of-the-art task-splitting multiprocessor scheduling algorithms and, for each of them, detailed Ada code that uses the new constructs.

Relevance: 100.00%

Abstract:

Consider the problem of scheduling a set of sporadically arriving implicit-deadline tasks to meet deadlines on a uniprocessor. Static-priority scheduling is considered using the slack-monotonic priority-assignment scheme. We prove that its utilization bound is 50%.
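The priority-assignment scheme can be sketched as follows: order tasks by slack T − C (smaller slack means higher priority) and check each task with standard response-time analysis for static-priority uniprocessor scheduling. The analysis routine below is a textbook schedulability check, not code from the paper:

```python
import math

def slack_monotonic_schedulable(tasks):
    """tasks: list of (C, T) pairs with implicit deadlines (D = T).
    Assign priorities by slack T - C (smaller slack = higher priority),
    then run exact response-time analysis on a uniprocessor."""
    ordered = sorted(tasks, key=lambda ct: ct[1] - ct[0])   # slack-monotonic
    for i, (C, T) in enumerate(ordered):
        higher = ordered[:i]            # tasks with higher priority
        R = C
        while True:                     # fixed-point iteration on R
            nxt = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in higher)
            if nxt == R:
                break
            R = nxt
            if R > T:                   # response time exceeds the deadline
                return False
        if R > T:
            return False
    return True

# Utilization exactly 50%: the bound proved in the paper guarantees this set.
ok = slack_monotonic_schedulable([(1, 4), (2, 8)])   # U = 0.25 + 0.25 = 0.5
```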

Relevance: 100.00%

Abstract:

The process of resource-system selection plays an important part in Distributed/Agile/Virtual Enterprise (D/A/VE) integration. However, resource-system selection is still a difficult problem to solve in a D/A/VE, as is pointed out in this paper. Globally, we can say that the selection problem has been approached from different angles, originating different kinds of models and algorithms to solve it. In order to assist the development of an intelligent and flexible web prototype tool (broker tool) that integrates all the selection-model activities and tools, and that can adapt to each D/A/VE project or instance (the major goal of our final project), in this paper we present a formulation of one kind of resource-selection problem and the limitations of the algorithms proposed to solve it. We formulate a particular case of the problem as an integer program, which is solved using simplex and branch-and-bound algorithms, and identify their performance limitations (in terms of processing time) based on simulation results. These limitations depend on the number of processing tasks and on the number of pre-selected resources per processing task, defining the domain of applicability of the algorithms for the problem studied. The limitations detected point to the need for other kinds of algorithms (approximate-solution algorithms) outside the domain of applicability found for the algorithms simulated. However, knowledge of the algorithms' limitations is very important for a broker tool, so that, based on the problem features, the most suitable algorithm can be developed and selected to guarantee good performance.
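A minimal sketch of the branch-and-bound idea applied to a selection problem of this shape (one resource chosen per processing task from a pre-selected candidate list) is shown below. The cost/time numbers and the budget constraint are illustrative assumptions, not the paper's formulation:

```python
def select_resources(costs, times, time_limit):
    """Branch and bound: pick one candidate resource per processing task,
    minimising total cost subject to a total-time budget.
    costs[i][j] / times[i][j]: cost and time of candidate j for task i."""
    n = len(costs)
    # Lower bound on remaining cost: cheapest candidate of each remaining task.
    min_rest = [0.0] * (n + 1)
    for i in range(n - 1, -1, -1):
        min_rest[i] = min_rest[i + 1] + min(costs[i])
    best = [float("inf"), None]

    def branch(i, cost, time, chosen):
        if time > time_limit or cost + min_rest[i] >= best[0]:
            return                      # prune infeasible / dominated branches
        if i == n:
            best[0], best[1] = cost, chosen
            return
        for j in range(len(costs[i])):
            branch(i + 1, cost + costs[i][j], time + times[i][j], chosen + [j])

    branch(0, 0.0, 0.0, [])
    return best[0], best[1]

# Two tasks, two pre-selected resources each (illustrative numbers).
cost, picks = select_resources(
    costs=[[4, 6], [3, 5]], times=[[5, 2], [4, 1]], time_limit=6)
```

The search space grows exponentially with the number of tasks and candidates, which is precisely the processing-time limitation the paper measures.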

Relevance: 100.00%

Abstract:

This study addresses the optimization of fractional algorithms for the discrete-time control of linear and non-linear systems. The paper starts by analyzing the fundamentals of fractional control systems and genetic algorithms. In a second phase the paper evaluates the problem from an optimization perspective. The results demonstrate the feasibility of the evolutionary strategy and its adaptability to distinct types of systems.
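A minimal sketch of the discrete-time fractional operator underlying such controllers is the Grünwald–Letnikov approximation, whose binomial weights follow a simple recurrence. This is a standard construction offered as an illustration; the paper's actual controller structure is not reproduced here:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights (-1)^j * binom(alpha, j), via the recurrence
    w[j] = w[j-1] * (1 - (alpha + 1) / j), with w[0] = 1."""
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def gl_derivative(x, alpha, h):
    """Discrete fractional derivative of order alpha for samples x, step h."""
    w = gl_weights(alpha, len(x))
    return [sum(w[j] * x[k - j] for j in range(k + 1)) / h ** alpha
            for k in range(len(x))]

# Sanity check: alpha = 1 reduces to an ordinary backward difference.
d1 = gl_derivative([0.0, 1.0, 2.0, 3.0], alpha=1.0, h=1.0)
```

For non-integer alpha the weights never vanish, so the operator carries memory of the whole sample history, which is the property fractional controllers exploit.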

Relevance: 100.00%

Abstract:

This article presents a work-in-progress version of a Dublin Core Application Profile (DCAP) developed to serve the Social and Solidarity Economy (SSE). Studies revealed that this community is interested in implementing both internal interoperability between their Web platforms, to build a global SSE e-marketplace, and external interoperability between their Web platforms and external ones. The Dublin Core Application Profile for Social and Solidarity Economy (DCAP-SSE) serves this purpose. SSE organisations are immersed in the market economy, but they have specificities not taken into account in this economy. The DCAP-SSE integrates terms from well-known metadata schemas, Resource Description Framework (RDF) vocabularies and ontologies, in order to enhance interoperability and take advantage of the benefits of the Linked Open Data ecosystem. It also integrates terms from the new essglobal RDF vocabulary, which was created with the goal of responding to SSE-specific needs, as well as five new Vocabulary Encoding Schemes to be used with DCAP-SSE properties. The DCAP development was based on a method for the development of application profiles (Me4MAP). We believe that this article has educational value, since it presents the idea that it is important to base DCAP developments on a method, and it shows the main results of applying such a method.

Relevance: 100.00%

Abstract:

Using low-cost portable devices that enable a single analytical step for screening environmental contaminants is a demanding issue today. This concept is tried out here by recycling screen-printed electrodes that were to be disposed of and by choosing as sensory element a low-cost material offering a specific response for an environmental contaminant. Microcystins (MCs) were used as target analyte, being dangerous toxins produced by cyanobacteria and released into water bodies. The sensory element was a plastic antibody designed by surface imprinting, with carefully selected monomers to ensure a specific response. These were designed on the walls of carbon nanotubes, taking advantage of their exceptional electrical properties. The stereochemical ability of the sensory material to detect MCs was checked by preparing blank materials in which the imprinting stage was carried out without the template molecule. The novel sensory material for MCs was introduced into a polymeric matrix and evaluated by potentiometric measurements. A Nernstian response was observed from 7.24 × 10^−10 to 1.28 × 10^−9 M in buffer solution (10 mM HEPES, 150 mM NaCl, pH 6.6), with average slopes of −62 mV decade^−1 and detection capabilities below 1 nM. The blank materials were unable to provide a linear response against log(concentration), showing only a slight change towards more positive potentials with increasing concentrations (while the response of the plastic antibodies moved to more negative values), with a maximum rate of +33 mV decade^−1. The sensors presented good selectivity towards sulphate, iron and ammonium ions, and also chloroform and tetrachloroethylene (TCE), as well as a fast response (<20 s). The concept was successfully tested on the analysis of spiked environmental water samples. The sensors were further applied on recycled chips, comprising one site for the reference electrode and two sites for different selective membranes, in a biparametric approach for “in situ” analysis.
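The −62 mV decade^−1 figure quoted above is the least-squares slope of electrode potential versus log10 of concentration, the usual way a Nernstian calibration is summarised. A minimal sketch, using hypothetical calibration points spanning the reported linear range (not the paper's data), generated with an ideal −62 mV/decade response:

```python
import math

def nernst_slope(concentrations_M, potentials_mV):
    """Least-squares slope of E (mV) versus log10(concentration):
    the Nernstian slope of an ion-selective electrode calibration."""
    x = [math.log10(c) for c in concentrations_M]
    y = potentials_mV
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical calibration points inside the reported linear range.
C = [7.24e-10, 9.0e-10, 1.1e-9, 1.28e-9]
E = [-62.0 * math.log10(c) for c in C]     # ideal -62 mV/decade electrode
slope = nernst_slope(C, E)
```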

Relevance: 100.00%

Abstract:

23rd International Conference on Real-Time Networks and Systems (RTNS 2015), Main Track, 4-6 November 2015, Lille, France. Best Paper Award nominee.

Relevance: 100.00%

Abstract:

6th International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines, Catania, Italy, 17-19 September

Relevance: 100.00%

Abstract:

Nowadays, real-time systems are growing in both importance and complexity. With the move from the uniprocessor to the multiprocessor environment, work done for the former is not fully applicable to the latter, since the level of complexity differs, mainly because of the presence of multiple processors in the system. It was realised early on that the complexity of the problem does not grow linearly as processors are added. In fact, this complexity stands as a barrier to scientific progress in the area, which for now remains largely uncharted, and this is witnessed essentially in task scheduling. The move to this new environment, whether for real-time systems or not, promises the opportunity to perform work that would never be possible in the former, thus offering new performance guarantees, lower monetary costs and lower energy consumption. This last factor emerged early on as perhaps the greatest barrier to the development of new processors in the uniprocessor area: as new processors reached the market offering ever higher performance, they exposed a heat-generation limit that forced the emergence of the multiprocessor area. In the future, the number of processors on a given chip is expected to grow and, naturally, new techniques for exploiting their inherent advantages must be developed; the area of scheduling algorithms is no exception.

Over the years, different categories of multiprocessor algorithms have been developed to address this problem, most notably: global, partitioned and semi-partitioned. The global approach assumes the existence of a global queue that is accessible by all available processors. This makes task migration possible, i.e. the execution of a task can be stopped and resumed on a different processor. At any given instant, from a set of tasks, the m highest-priority tasks are selected for execution. This type promises high utilization bounds, at the cost of many task preemptions/migrations. In contrast, partitioned algorithms place tasks into partitions, each of which is assigned to one of the available processors, i.e. one partition per processor. For this reason task migration is not possible, which keeps the utilization bound lower than in the previous case, but the number of task preemptions decreases significantly. The semi-partitioned scheme is a hybrid of the previous two: some tasks are partitioned, to be executed exclusively by a group of processors, while others are assigned to a single processor. The result is a solution capable of distributing the work to be done in a more efficient and balanced way. Unfortunately, in all these cases there is a discrepancy between theory and practice, because assumptions end up being made that do not hold in real life. To address this problem, these scheduling algorithms must be implemented in real operating systems and their applicability assessed so that, where it falls short, the necessary changes can be made at both the theoretical and the practical level.
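The partitioned category described above can be illustrated with first-fit bin packing of task utilizations onto processors, a common partitioning heuristic used here only as an illustration (the specific algorithms surveyed are not reproduced):

```python
def partition_first_fit(utilizations, m):
    """Partitioned multiprocessor scheduling: first-fit assignment of task
    utilizations onto m processors of capacity 1.0 each (no migration).
    Returns the processor index chosen for each task, or None on failure."""
    load = [0.0] * m
    assignment = []
    for u in utilizations:
        for p in range(m):
            if load[p] + u <= 1.0 + 1e-12:   # first processor that still fits
                load[p] += u
                assignment.append(p)
                break
        else:
            return None                      # no processor can host the task
    return assignment

# Four tasks on two processors; no single processor could hold them all.
plan = partition_first_fit([0.6, 0.5, 0.4, 0.3], m=2)
```

Once each task is pinned to a processor, scheduling reduces to m independent uniprocessor problems, which is why partitioned schemes trade utilization for far fewer preemptions.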