15 results for Computational Geometry and Object Modelling
at Instituto Politécnico do Porto, Portugal
Abstract:
Formula Student events gather engineering students who compete by designing, building, and racing single-seater cars. The ISEP team is working on its first car, which will soon take part in this competition. This work analyzes the chassis of the current design, focusing on suspension geometry and frame performance. After the results of the planned tests are analyzed, suggestions that can be taken into consideration during the design of future cars will be presented. As the car has not been tested yet, this work may also help to explain its performance on the track later.
Abstract:
Robotica 2012: 12th International Conference on Autonomous Robot Systems and Competitions, April 11, 2012, Guimarães, Portugal
Abstract:
The attached document is the post-print version (the version corrected by the editor).
Abstract:
Power system planning, control and operation require an adequate use of existing resources so as to increase system efficiency. The use of optimal solutions in power systems allows huge savings, stressing the need for adequate optimization and control methods. These must be able to solve the envisaged optimization problems in time scales compatible with operational requirements. Power systems are complex, uncertain and changing environments that make the use of traditional optimization methodologies impracticable in most real situations. Computational intelligence methods present good characteristics for addressing this kind of problem and have already proved to be efficient for very diverse power system optimization problems. Evolutionary computation, fuzzy systems, swarm intelligence, artificial immune systems, neural networks, and hybrid approaches are presently seen as the most adequate methodologies to address several planning, control and operation problems in power systems. Future power systems, with intensive use of distributed generation and electricity market liberalization, are more complex and bring huge challenges to the forefront of the power industry. Decentralized intelligence and decision making require more effective optimization and control techniques, so that the involved players can make the most adequate use of existing resources in the new context. The application of computational intelligence methods to several problems of future power systems is presented in this chapter. Four different applications are presented to illustrate the promise and potential of computational intelligence.
Abstract:
Knowledge is central to the modern economy and society. Indeed, the knowledge society has transformed the concept of knowledge and is more and more aware of the need to overcome the lack of knowledge when it has to make choices or address its problems and dilemmas. One’s knowledge is less based on exact facts and more on hypotheses, perceptions or indications. Even when we use new computational artefacts and novel methodologies for problem solving, such as Group Decision Support Systems (GDSSs), the question of incomplete information is in most situations marginalized. On the other hand, common sense tells us that when a decision is made it is impossible to have a perception of all the information involved and of the nature of its intrinsic quality. Therefore, something has to be done in terms of the information available and the process of its evaluation. It is under this framework that a Multi-valued Extended Logic Programming language will be used for knowledge representation and reasoning, leading to a model that embodies the Quality-of-Information (QoI) and its quantification along the several stages of the decision-making process. In this way, it is possible to provide a measure of the value of the QoI that supports the decision itself. This model is presented here in the context of a GDSS for VirtualECare, a system aimed at sustaining online healthcare services.
Abstract:
Introduction: Paper and thin-layer chromatography methods are frequently used in classic Nuclear Medicine for the determination of radiochemical purity (RCP) of radiopharmaceutical preparations. An aliquot of the radiopharmaceutical to be tested is spotted at the origin of a chromatographic strip (stationary phase), which in turn is placed in a chromatographic chamber in order to separate and quantify the radiochemical species present in the radiopharmaceutical preparation. There are several methods for the RCP measurement, based on the use of equipment such as dose calibrators, well scintillation counters, radiochromatographic scanners and gamma cameras. The purpose of this study was to compare these quantification methods for the determination of RCP. Material and Methods: 99mTc-Tetrofosmin and 99mTc-HDP were the radiopharmaceuticals chosen to serve as the basis for this study. For the determination of the RCP of 99mTc-Tetrofosmin, we used ITLC-SG (2.5 x 10 cm) and 2-butanone (99mTc-tetrofosmin Rf = 0.55, 99mTcO4- Rf = 1.0, other labeled impurities 99mTc-RH Rf = 0.0). For the determination of the RCP of 99mTc-HDP, Whatman 31ET and acetone were used (99mTc-HDP Rf = 0.0, 99mTcO4- Rf = 1.0, other labeled impurities Rf = 0.0). After the development of the solvent front, the strips were allowed to dry and then imaged on the gamma camera (256x256 matrix; zoom 2; LEHR parallel-hole collimator; 5-minute image) and on the radiochromatogram scanner. The strips were then cut at Rf 0.8 in the case of 99mTc-tetrofosmin and at Rf 0.5 in the case of 99mTc-HDP. The resulting pieces were crushed in an assay tube (to minimize the effect of counting geometry) and counted in the dose calibrator and in the well scintillation counter (for 1 minute). The RCP was calculated using the formula: % 99mTc-Complex = [(99mTc-Complex) / (Total amount of 99mTc-labeled species)] x 100. Statistical analysis was done using the test of hypotheses for the difference between means in independent samples. Results: The gamma-camera-based method demonstrated higher operator dependency (especially concerning the drawing of the ROIs), and the measurements obtained using the dose calibrator are very sensitive to the amount of activity spotted on the chromatographic strip, so the use of a minimum activity of 3.7 MBq is essential to minimize quantification errors. The radiochromatographic scanner and the well scintillation counter showed concordant results and demonstrated the highest level of precision. Conclusions: The methods based on radiochromatographic scanners and well scintillation counters proved to be the most accurate and the least operator-dependent.
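As a worked illustration of the RCP formula above, the short sketch below computes the purity from the counts of the two strip pieces. All count values and variable names are hypothetical, chosen only to show the arithmetic, not taken from the study.

```python
# Minimal sketch of the radiochemical purity (RCP) calculation described
# above. The count values below are hypothetical, for illustration only.

def radiochemical_purity(complex_counts: float, impurity_counts: float) -> float:
    """%RCP = (counts of the 99mTc complex / total counts of all
    99mTc-labelled species) x 100."""
    total = complex_counts + impurity_counts
    return 100.0 * complex_counts / total

# Example: strip cut at Rf 0.8; the lower piece holds the 99mTc-tetrofosmin,
# the upper piece holds free pertechnetate (99mTcO4-) and other impurities.
lower_piece = 9_450.0   # hypothetical counts for the labelled complex
upper_piece = 320.0     # hypothetical counts for the impurities
print(f"RCP = {radiochemical_purity(lower_piece, upper_piece):.1f} %")
```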
Abstract:
This paper reports on the analysis of tidal breathing patterns measured during noninvasive forced oscillation lung function tests in six individual groups. The three adult groups were healthy, with prediagnosed chronic obstructive pulmonary disease, and with prediagnosed kyphoscoliosis, respectively. The three children’s groups were healthy, with prediagnosed asthma, and with prediagnosed cystic fibrosis, respectively. The analysis is applied to the pressure–volume curves and the pseudophase-plane loop by means of the box-counting method, which gives a measure of the area within each loop. The objective was to verify whether there exists a link between the area of the loops, power-law patterns, and alterations in the respiratory structure with disease. We obtained statistically significant variations between the data sets corresponding to the six groups of patients, also showing the existence of power-law patterns. Our findings support the idea that the respiratory system changes with disease in terms of airway geometry and tissue parameters, leading, in turn, to variations in the fractal dimension of the respiratory tree and its dynamics.
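For readers unfamiliar with the technique, the sketch below is a generic box-counting routine (not the authors’ implementation): a sampled loop is overlaid with grids of shrinking box size s, the slope of log N(s) versus log(1/s) estimates the fractal dimension, and N(s)·s² approximates the area covered at each scale. The synthetic noisy ellipse stands in for a real pressure–volume loop.

```python
# Generic box-counting sketch (illustrative, not the authors' code).
import numpy as np

def box_count(points: np.ndarray, s: float) -> int:
    """Number of boxes of side s touched by the point set."""
    return len(np.unique(np.floor(points / s), axis=0))

# Hypothetical loop: a noisy ellipse stands in for a pressure-volume curve.
t = np.linspace(0.0, 2.0 * np.pi, 2000)
loop = np.column_stack((np.cos(t), 0.5 * np.sin(t)))
loop += 0.01 * np.random.default_rng(0).standard_normal(loop.shape)

sizes = np.array([0.2, 0.1, 0.05, 0.025, 0.0125])
counts = np.array([box_count(loop, s) for s in sizes])
# Slope of log N(s) vs. log(1/s) estimates the box-counting dimension.
slope = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]
print(f"estimated box-counting dimension: {slope:.2f}")
```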
Abstract:
The use of composite laminates in complex structures has increased significantly. However, there are still some issues regarding their use, mainly related to machining, leading to some difficulties and a lack of acceptance. In this work, a methodology to evaluate drill geometry and feed rate based on thrust force and delamination extension is presented.
Abstract:
In this study, the concentration probability distributions of 82 pharmaceutical compounds detected in the effluents of 179 European wastewater treatment plants were computed and inserted into a multimedia fate model. The comparative ecotoxicological impact of the direct emission of these compounds from wastewater treatment plants on freshwater ecosystems, based on a potentially affected fraction (PAF) of species approach, was assessed to rank the compounds by priority. As many pharmaceuticals are acids or bases, the multimedia fate model incorporates regressions to estimate pH-dependent fate parameters. An uncertainty analysis was performed by means of Monte Carlo analysis, which included the uncertainty of fate and ecotoxicity model input variables, as well as the spatial variability of landscape characteristics at the European continental scale. Several pharmaceutical compounds were identified as being of greatest concern, including 7 analgesics/anti-inflammatories, 3 β-blockers, 3 psychiatric drugs, and 1 from each of 6 other therapeutic classes. The fate and impact modelling relied extensively on estimated data, given that most of these compounds have little or no experimental fate or ecotoxicity data available, as well as a limited reported occurrence in effluents. The contribution of estimated model input variables to the variance of the freshwater ecotoxicity impact, as well as the lack of experimental abiotic degradation data for most compounds, helped in establishing priorities for further testing. Generally, the effluent concentration and the ecotoxicity effect factor were the model input variables with the most significant effect on the uncertainty of the output results.
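A minimal Monte Carlo sketch of the kind of uncertainty propagation described above follows. It is entirely illustrative: the lognormal input distributions and the toy impact expression are placeholders for the actual multimedia fate model and PAF-based effect factors used in the study.

```python
# Minimal Monte Carlo uncertainty-propagation sketch (placeholders only;
# the real study chains a multimedia fate model with species-sensitivity
# (PAF) effect factors).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical inputs: effluent concentration (ug/L), fate factor (days),
# ecotoxicity effect factor (PAF per ug/L), each with its own uncertainty.
conc   = rng.lognormal(mean=np.log(0.5),  sigma=0.8, size=n)
fate   = rng.lognormal(mean=np.log(3.0),  sigma=0.5, size=n)
effect = rng.lognormal(mean=np.log(0.02), sigma=1.0, size=n)

impact = conc * fate * effect  # toy stand-in for the full model chain

low, med, high = np.percentile(impact, [2.5, 50.0, 97.5])
print(f"median impact {med:.3g}, 95% interval [{low:.3g}, {high:.3g}]")
```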
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, and the upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP correspond to two special cases of BPC, with the trade-off parameter set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and also provide some case studies to observe the impact of task parameters on the WCTT estimates.
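To convey the flavour of a branch-and-prune search, the sketch below is a generic skeleton only; the paper’s BP and BPC algorithms operate on NoC flow-interference scenarios, which are not modelled here. It maximises a worst-case sum over per-hop candidate delays, discarding any branch whose optimistic bound cannot beat the best complete assignment found so far. All values are hypothetical.

```python
# Generic branch-and-prune skeleton for a worst-case maximisation
# (illustrative only; not the paper's NoC interference model).

def branch_and_prune(choices, bound_fn, value_fn):
    """Maximise value_fn over full assignments, pruning by bound_fn."""
    best = float("-inf")
    stack = [()]                               # start from the empty prefix
    while stack:
        node = stack.pop()
        if len(node) == len(choices):          # full assignment: evaluate it
            best = max(best, value_fn(node))
            continue
        for option in choices[len(node)]:      # branch on the next stage
            child = node + (option,)
            if bound_fn(child) > best:         # prune dominated branches
                stack.append(child)
    return best

# Hypothetical 3-hop route, each hop with candidate interference delays.
hops = [(0, 2, 5), (1, 3), (0, 4)]
# Optimistic bound: delays fixed so far plus the maximum of every open hop.
worst = branch_and_prune(
    hops,
    bound_fn=lambda p: sum(p) + sum(max(h) for h in hops[len(p):]),
    value_fn=sum)
print("worst-case extra delay:", worst)        # prints 12
```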
Abstract:
20th International Conference on Reliable Software Technologies - Ada-Europe 2015 (Ada-Europe 2015), 22 to 26 June 2015, Madrid, Spain.
Abstract:
Underground scenarios are among the most challenging environments for accurate and precise 3D mapping, where hostile conditions such as the absence of Global Positioning System coverage, extreme lighting variations and geometrically smooth surfaces may be expected. So far, the state-of-the-art methods in underground modelling remain restricted to environments in which pronounced geometric features are abundant. This limitation is a consequence of the scan-matching algorithms used to solve the localization and registration problems. This paper contributes to the expansion of the modelling capabilities to structures characterized by uniform geometry and smooth surfaces, as is the case of road and train tunnels. To achieve that, we combine some state-of-the-art techniques from mobile robotics and propose a method for 6DOF platform positioning in such scenarios, which is later used for the environment modelling. A visual monocular Simultaneous Localization and Mapping (MonoSLAM) approach based on the Extended Kalman Filter (EKF), complemented by the introduction of inertial measurements in the prediction step, allows our system to localize itself over long distances, using exclusively sensors carried on board a mobile platform. By feeding the Extended Kalman Filter with inertial data, we were able to overcome the major problem of MonoSLAM implementations, known as scale-factor ambiguity. Despite extreme lighting variations, reliable visual features were extracted through the SIFT algorithm and inserted directly into the EKF mechanism according to the Inverse Depth Parametrization. Wrong frame-to-frame feature matches were rejected through 1-Point RANSAC (Random Sample Consensus). The developed method was tested on a dataset acquired inside a road tunnel, and the navigation results were compared with a ground truth obtained by post-processing a high-grade Inertial Navigation System and L1/L2 RTK-GPS measurements acquired outside the tunnel. Results from the localization strategy are presented and analyzed.
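The role of the inertial data in the prediction step can be illustrated with a deliberately reduced sketch: a generic position-velocity state driven by an accelerometer reading, whereas the paper’s actual MonoSLAM state also carries the camera pose and inverse-depth map features. Because the IMU provides metric accelerations, the predicted state is anchored to a real-world scale, which is what resolves the monocular scale ambiguity.

```python
# Schematic EKF prediction step fed with inertial data (generic sketch,
# not the paper's full MonoSLAM state). State: position p and velocity v.
import numpy as np

def ekf_predict(x, P, accel_meas, dt, accel_noise_var=0.1):
    """Propagate state and covariance using an IMU acceleration reading."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])              # constant-velocity transition
    B = np.array([[0.5 * dt**2],
                  [dt]])                    # how acceleration enters the state
    x = F @ x + B * accel_meas              # state prediction
    Q = accel_noise_var * (B @ B.T)         # process noise from IMU noise
    P = F @ P @ F.T + Q                     # covariance prediction
    return x, P

x = np.array([[0.0], [1.0]])                # start at 0 m, moving at 1 m/s
P = np.eye(2) * 0.01
x, P = ekf_predict(x, P, accel_meas=0.2, dt=0.1)
print(x.ravel())                            # predicted position and velocity
```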
Abstract:
Adhesive bonds are often used in the manufacture of complex structures that could not, or could not as easily, be manufactured as a single piece, in order to provide a structural union that, in theory, should be at least as strong as the base material. Adhesive joints have been replacing methods such as welding and bolted and riveted connections because of their ease of manufacture, lower cost, ease of joining different materials, better strength, among other characteristics. Carbon-fibre-reinforced composite materials are widely used in many industries, such as boat building, automotive and aeronautics, in structures that require high specific strength and stiffness, which reduces the weight of components while maintaining the strength and stiffness needed to withstand the various applied loads. Although these manufacturing methods minimize the number of connections through advanced fabrication techniques, connections are still necessary due to the size of the components and to technological and logistical design limitations. In many structures, the combination of composites with metals such as aluminium or titanium brings design advantages. This work aims to study, experimentally and by cohesive damage models (MDC), adhesive L-joints between aluminium and carbon-epoxy composite components under peel loading, considering different joint configurations and adhesives of distinct ductility. The geometric parameters addressed are the thickness of the aluminium adherend (tP2) and the overlap length (LO). The numerical analysis enabled the study of the stress distributions, damage evolution, strength and failure modes. The experimental tests validate the numerical results and provide design guidelines for L-joints. It was shown that the geometry of the L-shaped (aluminium) adherend and the type of adhesive have a direct influence on the joint strength.
Abstract:
With the need to find an alternative to mechanical and welded joints, and at the same time to overcome some limitations linked to these traditional techniques, adhesive bonds can be used. Adhesive bonding is a permanent joining process that uses an adhesive to bond the components of a structure. Fibre-reinforced composite materials are becoming increasingly popular in many applications as a result of a number of competitive advantages. In the manufacture of composite structures, although advanced manufacturing techniques reduce the number of connections to a minimum, their use is still required due to typical size limitations and to design, technological and logistical aspects. Moreover, it is known that many high-performance structures require joints between composite materials and light metals such as aluminium, for purposes of structural optimization. This work deals with the experimental and numerical study of single-lap joints (SLJ) bonded with a brittle (Nagase Chemtex Denatite XNRH6823) and a ductile adhesive (Nagase Chemtex Denatite XNR6852). These are applied to hybrid joints between aluminium (AL6082-T651) and carbon fibre reinforced plastic (CFRP; Texipreg HS 160 RM) adherends with different overlap lengths (LO) under tensile loading. The Finite Element (FE) Method is used to perform detailed stress and damage analyses, allowing the joints’ behaviour to be explained, and the use of cohesive zone models (CZM) enables predicting the joint strength and creating a simple and rapid design methodology. The use of numerical methods to simulate the behaviour of the joints can lead to savings of time and resources by optimizing the geometry and material parameters of the joints. The joints’ strength and failure modes were highly dependent on the adhesive, and this behaviour was successfully modelled numerically. Using the brittle adhesive resulted in a negligible improvement of the maximum load (Pm) with LO. The joints bonded with the ductile adhesive showed a nearly linear improvement of Pm with LO.
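Since the last two abstracts both rely on cohesive damage/zone modelling, a minimal sketch of the ingredient at its core may help: the bilinear (triangular) traction-separation law below relates the local traction across the adhesive layer to the separation of the crack faces. The numerical values are illustrative placeholders, not the measured properties of the XNRH6823 or XNR6852 adhesives.

```python
# Generic bilinear (triangular) traction-separation law, the building block
# of cohesive zone models (illustrative parameter values only).

def traction(delta: float, t0: float = 30.0, delta0: float = 0.01,
             delta_f: float = 0.1) -> float:
    """Traction (MPa) for a separation delta (mm): linear elastic rise to
    the cohesive strength t0, then linear softening to full failure."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:
        return t0 * delta / delta0                          # elastic branch
    if delta < delta_f:
        return t0 * (delta_f - delta) / (delta_f - delta0)  # damage softening
    return 0.0                                              # complete failure

for d in (0.005, 0.01, 0.05, 0.1):
    print(f"delta = {d:.3f} mm -> traction = {traction(d):.1f} MPa")
```

The area under the curve, 0.5·t0·delta_f, is the fracture energy; in a CZM strength prediction these parameters are calibrated against experimental data for each adhesive, which is why joint strength and failure mode track the adhesive’s ductility as reported above.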