919 results for "Image-based mesh generation"
Abstract:
In recent years, autonomous aerial vehicles have gained great popularity in a variety of automation applications. The capability of generating trajectories has assumed a key role in accomplishing varied and challenging tasks. As higher performance is sought, traditional flatness-based trajectory generation schemes show their limitations: these approaches neglect the highly nonlinear dynamics of the quadrotor. Strategies based on optimal control principles therefore turn out to be beneficial, since in the trajectory generation process they allow the control unit to best exploit the actual dynamics, and enable the drone to perform quite aggressive maneuvers. This dissertation is concerned with the development of an optimal control technique to generate trajectories for autonomous drones. The algorithm adopted to this end is a second-order iterative method working directly in continuous time which, under proper initialization, guarantees quadratic convergence to a locally optimal trajectory. At each iteration a quadratic approximation of the cost functional is minimized, and a descent direction is obtained as a linear-affine control law after solving a differential Riccati equation. The algorithm has been implemented and its effectiveness has been tested on the vectored-thrust dynamical model of a quadrotor in a realistic simulation setup.
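The Riccati step named above admits a compact standard form. The following is a generic linear-quadratic sketch with assumed notation ($A(t), B(t)$ for the dynamics linearized along the current iterate, $Q(t), R(t)$ for the second-order cost weights, $Q_T$ for the terminal weight); the dissertation's exact formulation may differ:

```latex
% Backward differential Riccati equation along the current trajectory iterate
-\dot{P}(t) = A(t)^{\top} P(t) + P(t)\,A(t)
              - P(t)\,B(t)\,R(t)^{-1} B(t)^{\top} P(t) + Q(t),
\qquad P(T) = Q_T .

% The descent direction is then recovered as a linear-affine control update
\delta u(t) = -R(t)^{-1} B(t)^{\top} \bigl( P(t)\,\delta x(t) + p(t) \bigr)
```

Here $p(t)$ solves an auxiliary affine ODE carrying the first-order cost terms; integrating the closed-loop dynamics forward under this law produces the next trajectory iterate.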
Abstract:
This paper proposes a physically non-linear formulation to deal with steel-fiber-reinforced concrete by the finite element method. The proposed formulation allows the consideration of short or long fibers placed arbitrarily inside a continuum domain (matrix). The most important feature of the formulation is that no additional degree of freedom is introduced into the pre-existent finite element numerical system to consider any distribution or quantity of fiber inclusions. In other words, the size of the system of equations used to solve a non-reinforced medium is the same as the one used to solve the reinforced counterpart. Another important characteristic of the formulation is the reduced work required by the user to introduce reinforcements, avoiding "rebar" elements, node-by-node geometrical definitions or even complex mesh generation. A bonded connection between long fibers and the continuum is considered; for short fibers, a simplified approach is proposed to account for splitting. Non-associative plasticity is adopted for the continuum and one-dimensional plasticity is adopted to model the fibers. Examples are presented in order to show the capabilities of the formulation.
Abstract:
This communication proposes a simple way to introduce fibers into finite element modelling. It is a promising formulation for dealing with fiber-reinforced composites by the finite element method (FEM), as it allows the consideration of short or long fibers placed arbitrarily inside a continuum domain (matrix). The most important feature of the formulation is that no additional degree of freedom is introduced into the pre-existent finite element numerical system to consider any distribution of fiber inclusions. In other words, the size of the system of equations used to solve a non-reinforced medium is the same as the one used to solve the reinforced counterpart. Another important characteristic is the reduced work required by the user to introduce fibers, avoiding `rebar` elements, node-by-node geometrical definitions or even complex mesh generation. An additional characteristic of the technique is the possibility of representing unbounded stresses at the ends of fibers using a finite number of degrees of freedom. Further studies are required for non-linear applications in which localization may occur. Throughout the text the linear formulation is presented and a bonded connection between fibers and the continuum is considered. Four examples are presented, including non-linear analysis, to validate and show the capabilities of the formulation. Copyright (c) 2007 John Wiley & Sons, Ltd.
Abstract:
Given the dynamic nature of cardiac function, correct temporal alignment of pre-operative models and intra-operative images is crucial for augmented reality in cardiac image-guided interventions. As such, the current study focuses on the development of an image-based strategy for temporal alignment of multimodal cardiac imaging sequences, such as cine Magnetic Resonance Imaging (MRI) or 3D Ultrasound (US). First, we derive a robust, modality-independent signal from the image sequences, estimated by computing the normalized cross-correlation between each frame in the temporal sequence and the end-diastolic frame. This signal is a surrogate for the left-ventricle (LV) volume curve over time, whose variation indicates different temporal landmarks of the cardiac cycle. We then perform the temporal alignment of these surrogate signals derived from MRI and US sequences of the same patient through Dynamic Time Warping (DTW), allowing both sequences to be synchronized. The proposed framework was evaluated in 98 patients who had undergone both 3D+t MRI and US scans. The end-systolic frame could be accurately estimated as the minimum of the image-derived surrogate signal, presenting a relative error of 1.6 ± 1.9% and 4.0 ± 4.2% for the MRI and US sequences, respectively, thus supporting its association with key temporal instants of the cardiac cycle. The use of DTW reduces the desynchronization of cardiac events in the MRI and US sequences, making it possible to temporally align multimodal cardiac imaging sequences. Overall, a generic, fast and accurate method for temporal synchronization of MRI and US sequences of the same patient was introduced. This approach could be straightforwardly used for the correct temporal alignment of pre-operative MRI information and intra-operative US images.
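The two stages described above (a frame-similarity surrogate signal, then DTW alignment) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the random "frames", the cosine test signals, and the absolute-difference DTW cost are all assumptions.

```python
import numpy as np

def surrogate_signal(frames, ed_frame):
    """Normalized cross-correlation of each frame with the end-diastolic frame."""
    ed = (ed_frame - ed_frame.mean()) / ed_frame.std()
    out = []
    for f in frames:
        fz = (f - f.mean()) / f.std()
        out.append(float((fz * ed).mean()))
    return np.array(out)

def dtw_path(a, b):
    """Classic O(n*m) dynamic time warping; returns the optimal alignment path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m          # backtrack from the end of both sequences
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy demo: the surrogate value of a frame identical to end-diastole is 1
rng = np.random.default_rng(0)
ed = rng.standard_normal((8, 8))
frames = [ed, ed + 0.5 * rng.standard_normal((8, 8)), ed]
sig = surrogate_signal(frames, ed)

# Align two cardiac-like signals, one slightly time-shifted
t = np.linspace(0.0, 1.0, 30)
mri_sig = np.cos(2 * np.pi * t)
us_sig = np.cos(2 * np.pi * (t - 0.1))
path = dtw_path(mri_sig, us_sig)
```

The warping path maps each MRI frame index to one or more US frame indices, which is what synchronizes the two sequences.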
Abstract:
Wilms tumor (WT) is the most common renal tumor of childhood, with an incidence of about 1 in 10,000 children. The disease has a complex and diverse genetic etiology; nevertheless, about one third of patients carry somatic mutations associated with the WT1, CTNNB1, TP53 and/or AMER1 genes. An amplicon panel covering these four genes was therefore developed to identify mutations in a group of Portuguese WT patients, using a next-generation sequencing methodology. DNA libraries were prepared from peripheral blood and tumor samples of 36 WT patients and sequenced on the MiSeq. Somatic alterations were identified in 7 of the 36 (19.4%) patients studied. It is concluded that sequencing a gene panel is a fast method for detecting somatic mutations, provided the panel is carefully designed to avoid loss-of-coverage problems.
Abstract:
Renewable-based power generation has increased significantly over recent years. However, this process has evolved separately from electricity markets, so the present market models are inadequate to cope with huge quantities of renewable energy resources and to take full advantage of the existing and increasingly envisaged renewable-based and distributed energy resources. This paper proposes the modelling of electricity markets at several levels (continental, regional and micro), taking into account the specific characteristics of the players and resources involved at each level and ensuring that the proposed models accommodate adequate business models able to support the contribution of all the resources in the system, from the largest to the smallest. The proposed market models are integrated in MASCEM (Multi-Agent Simulator of Competitive Electricity Markets), using the advantages of the multi-agent approach to overcome the inadequacy and significant limitations of existing electricity market simulators in dealing with the complex electricity market models that must be adopted.
Abstract:
This paper presents ELECON - Electricity Consumption Analysis to Promote Energy Efficiency Considering Demand Response and Non-technical Losses, an international research project that involves European and Brazilian partners. ELECON focuses on increasing energy efficiency through consumers' active participation, a key area for Europe-Brazil cooperation. The project aims at significantly contributing towards the successful implementation of smart grids, focusing on the use of new methods that allow the efficient use of distributed energy resources, namely distributed generation, storage and demand response. ELECON puts together researchers from seven European and Brazilian partners, with consolidated research backgrounds and complementary competences. ELECON involves institutions from 3 European countries (Portugal, Germany, and France) and 4 Brazilian institutions. The complementary background and experience of the European and Brazilian partners is highly relevant to ensure the capacities required to achieve the proposed goals. In fact, the European Union (EU) and Brazil have very different resources and approaches in this area. Having huge hydro and fossil resources, Brazil has not put emphasis on distributed renewable-based electricity generation. On the contrary, the EU has made huge investments in this area, driven by environmental concerns and by the economic external dependence dictated by its huge imports of energy-related products. Sharing these different backgrounds allows the project team to propose new methodologies able to efficiently address the new challenges of smart grids.
Abstract:
In recent years the use of several new resources in power systems, such as distributed generation, demand response and, more recently, electric vehicles, has significantly increased. Power systems aim at lowering operational costs, which requires adequate energy resource management. In this context, load consumption management plays an important role, and optimization strategies are needed to adjust consumption to the supply profile. These optimization strategies can be integrated in demand response programs. Controlling the energy consumption of an intelligent house aims at optimizing the load consumption. This paper presents a genetic algorithm approach to manage the consumption of a residential house, making use of a SCADA system developed by the authors. Consumption management is done by reducing or curtailing loads to keep the power consumption at, or below, a specified energy consumption limit. This limit is determined according to the consumer's strategy, taking into account the renewable-based micro-generation, energy price, supplier solicitations, and consumers' preferences. The proposed approach is compared with a mixed-integer non-linear approach.
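A genetic algorithm for this kind of load curtailment can be sketched compactly. The loads, priorities, consumption limit, and GA operators below are illustrative assumptions, not the authors' SCADA implementation:

```python
import random

# Hypothetical residential loads (kW) and consumer-assigned priorities
loads = [2.0, 1.5, 1.0, 0.5, 3.0]
priority = [5, 3, 2, 1, 4]
limit = 5.0  # consumption limit (kW) set by the consumer strategy

def fitness(chrom):
    """Served priority is maximized; exceeding the limit is penalized."""
    power = sum(l for l, on in zip(loads, chrom) if on)
    if power > limit:
        return -power          # infeasible: worse than any feasible plan
    return sum(p for p, on in zip(priority, chrom) if on)

def ga(pop_size=20, gens=50, pmut=0.1, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in loads] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]           # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)        # parents from the elite
            cut = rng.randrange(1, len(loads))
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < pmut:            # bit-flip mutation
                i = rng.randrange(len(loads))
                child[i] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
best_power = sum(l for l, on in zip(loads, best) if on)
```

Each chromosome is a binary on/off plan for the loads; the penalty term steers the population toward plans that respect the limit while serving the highest-priority loads.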
Abstract:
Smart grids are envisaged as infrastructures able to accommodate all centralized and distributed energy resources (DER), including intensive use of renewable and distributed generation (DG), storage, demand response (DR), and also electric vehicles (EV), of which plug-in, i.e. gridable, vehicles are especially relevant. Moreover, smart grids must accommodate a large number of diverse types of players in the context of a competitive business environment. Smart grids should also provide the required means to efficiently manage all these resources, which is especially important in order to make the best possible use of renewable-based power generation, namely to minimize wind curtailment. An integrated approach, considering all the available energy resources, including demand response and storage, is crucial to attain these goals. This paper proposes a methodology for energy resource management that considers several Virtual Power Players (VPPs) managing a network with high penetration of distributed generation, demand response, storage units and network reconfiguration. The resources are controlled through a flexible SCADA (Supervisory Control And Data Acquisition) system that can be accessed by the involved entities (VPPs) under contracted use conditions. A case study evidences the advantages of the proposed methodology in supporting a Virtual Power Player (VPP) managing the energy resources it can access in an incident situation.
Abstract:
This paper proposes a wind speed forecasting model that contributes to the development and implementation of adequate methodologies for Energy Resource Management in a distribution power network with intensive use of wind-based power generation. The proposed forecasting methodology aims to support operation in the scope of the intraday resource scheduling model, namely with a time horizon of 10 minutes. A case study uses a real database from the meteorological station installed in the GECAD renewable energy lab. A new wind speed forecasting model has been implemented, and its estimated accuracy was evaluated and compared with a previously developed forecasting model. Using the wind speed information from the previous 3 hours as input attributes makes it possible to obtain highly accurate short-term wind forecasts.
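One simple way to turn 3 hours of 10-minute wind samples into a one-step forecast is a lagged autoregressive model fitted by least squares. The synthetic data and the AR structure below are illustrative assumptions, not GECAD's forecasting model:

```python
import numpy as np

def make_lagged(series, lags):
    """Rows are [y_{t-lags}, ..., y_{t-1}]; the target is y_t."""
    X = np.array([series[t - lags:t] for t in range(lags, len(series))])
    return X, series[lags:]

# Synthetic 10-minute wind-speed samples (m/s) with a slow daily-style swing;
# 3 hours of history at 10-minute resolution gives 18 lags
rng = np.random.default_rng(0)
t = np.arange(400)
series = 8.0 + 2.0 * np.sin(2 * np.pi * t / 144) + 0.1 * rng.standard_normal(400)

lags = 18
X, y = make_lagged(series, lags)
Xb = np.column_stack([X, np.ones(len(X))])      # add an intercept column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)      # least-squares AR fit

# One-step-ahead (10-minute) forecast from the most recent 3 hours
forecast = float(np.append(series[-lags:], 1.0) @ w)
```

With real data, the same lag matrix would be fed to whichever learner the paper adopts; the least-squares fit is just the simplest baseline.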
Abstract:
Energy systems worldwide are complex and challenging environments. Multi-agent based simulation platforms are increasing at a high rate, as they have proven to be a good option for studying many issues related to these systems, as well as the players that act in this domain. In this scope the authors' research group has developed a multi-agent system: MASCEM (Multi-Agent System for Competitive Electricity Markets), which performs realistic simulations of electricity markets. MASCEM is integrated with ALBidS (Adaptive Learning Strategic Bidding System), which works as a decision support system for market players. The ALBidS system allows MASCEM market negotiating players to take the best possible advantage of each market context. However, it is still necessary to adequately optimize the players' portfolio investment. For this purpose, this paper proposes a market portfolio optimization method, based on particle swarm optimization, which provides the best investment profile for a market player, considering different market opportunities (bilateral negotiation, market sessions, and operation in different markets) and the negotiation context, such as the peak and off-peak periods of the day, the type of day (business day, weekend, holiday, etc.) and, most importantly, the renewable-based distributed generation forecast. The proposed approach is tested and validated using real electricity market data from the Iberian operator, MIBEL.
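A particle swarm optimizer for splitting a player's volume across market opportunities can be sketched as follows. The three markets, the concave price model, and the PSO constants are illustrative assumptions, not MIBEL data or the paper's method:

```python
import numpy as np

# Hypothetical market opportunities: base price (EUR/MWh) and a linear
# price-erosion factor per MWh placed (a stand-in for market depth)
base = np.array([50.0, 45.0, 55.0])
erosion = np.array([0.20, 0.05, 0.40])
total = 100.0  # MWh the player wants to allocate across the three options

def revenue(alloc):
    """Concave revenue: the achieved price falls with the quantity placed."""
    return float(np.sum(alloc * (base - erosion * alloc)))

def normalize(pos):
    """Project particles back to non-negative allocations summing to `total`."""
    pos = np.clip(pos, 0.0, None)
    s = pos.sum(axis=1, keepdims=True)
    s[s == 0.0] = 1.0
    return pos / s * total

def pso(n=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    pos = normalize(rng.random((n, 3)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([revenue(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, 3))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = normalize(pos + vel)
        vals = np.array([revenue(p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, revenue(gbest)

alloc, rev = pso()
```

The renormalization step keeps every particle on the feasible simplex (all volume placed, no negative bids), which is a simple way to handle the equality constraint inside PSO.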
Abstract:
Human intervention in the operation of remotely operated underwater vehicles (ROVs) is necessary to guarantee mission success and equipment integrity. However, their teleoperation is not easy, so assisted piloting of these vehicles becomes relevant. This dissertation proposes a solution to this problem for 3-DOF ROVs (surge, heave and yaw). Two distinct approaches are proposed. The first is an Image-Based Visual Servoing (IBVS) control system that relies exclusively on a camera (a sensor already present in this type of system) to significantly improve the teleoperation of a small ROV. The second is a kinematic control system for the horizontal plane of the vehicle, together with a maneuver algorithm capable of providing the ROV with lateral motion through a sawtooth trajectory. It was demonstrated in real operation scenarios that the system proposed in the first approach allows the operator of a 3-DOF ROV to execute tasks of some complexity (stabilization) using only high-level commands, thus drastically improving the teleoperation and inspection quality of the vehicle in question. A MATLAB simulator of the ROV was also developed for validation and evaluation of the maneuvers, in which the system proposed in the second approach was successfully validated.
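The classic IBVS loop can be sketched for a single point feature. Everything here is a textbook illustration under strong assumptions (forward-looking camera, constant feature depth Z, surge mapped to the camera's optical axis, heave to camera y, yaw to rotation about camera y), not the dissertation's controller:

```python
import numpy as np

def interaction_matrix_3dof(x, y, Z):
    """2x6 point-feature interaction matrix, reduced to the columns actuated
    by a 3-DOF vehicle: surge (camera z), heave (camera y), yaw (rotation
    about camera y). The column choice is an assumption about mounting."""
    L_full = np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])
    return L_full[:, [2, 1, 4]]

lam, dt, Z = 0.5, 0.1, 2.0         # gain, time step, assumed constant depth
s = np.array([0.3, -0.2])          # current feature (normalized image coords)
s_star = np.zeros(2)               # desired feature position (image center)
for _ in range(100):
    L = interaction_matrix_3dof(s[0], s[1], Z)
    e = s - s_star
    v = -lam * np.linalg.pinv(L) @ e   # commanded (surge, heave, yaw) rates
    s = s + dt * (L @ v)               # first-order feature kinematics
err = float(np.linalg.norm(s - s_star))
```

With the standard law v = -lam * pinv(L) @ e and the reduced matrix at full row rank, the feature error contracts by (1 - lam*dt) per step, which is what drives the feature to the image center.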
Abstract:
High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the phenotype of a cell, thereby preventing tumor growth and metastasis. The high-resolution biofluorescence images from assays allow precise quantitative measures, enabling the distinction of small molecules of a host cell from a tumor. In this work, we are particularly interested in the application of deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds into chemical mechanisms of action (MOAs). Compound classification has been performed using image-based profiling methods, sometimes combined with feature reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell to a particular MOA class without using any treatment-level profiles or feature reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain leveraging single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the computationally demanding effort of searching the huge parameter space of a DNN. Results show that using this approach we obtain a 30% speedup and a 2% accuracy improvement.
Abstract:
This report gives a comprehensive and up-to-date review of Alzheimer's disease biomarkers. Recent years have seen significant advances in this field. Whilst considerable effort has focused on Aβ- and tau-related markers, a substantial number of other molecules have been identified that may offer new opportunities. This report identifies 60 candidate Alzheimer's disease (AD) biomarkers and their associated studies. Of these, 49 are single species or single parameters, 7 are combinations or panels and 4 involve the measurement of two species or parameters or their ratios. These include proteins (n=34), genes (n=11), image-based parameters (n=7), small molecules (n=3), proteins + genes (n=2) and others (n=3). Of these, 30 (50%) relate to species identified in CSF and 19 (32%) were found in the blood. These candidates may be classified on the basis of their diagnostic utility, namely those which i) may allow AD to be detected when the disease has developed (48 of 75† = 64%), ii) may allow early detection of AD (18 of 75† = 24%) and iii) may allow AD to be predicted before the disease has begun to develop (9 of 75† = 12%). † Note: of these, 11 were linked to two or more of these capabilities (e.g. allowed both early-stage detection and diagnosis after the disease has developed). Biomarkers: AD biomarkers identified in this report show significant diversity; however, of the 60 described, 18 (30%) are associated with amyloid beta (Aβ) and 9 (15%) relate to tau. The remainder of the biomarkers (just over half) fall into a number of different groups. Of these, some are associated with other hypotheses on the pathogenesis of AD; however, the vast majority are individually unique and not obviously linked with other markers. The analysis and discussion presented in this report include summaries of the studies and clinical trials that have led to the identification of these markers.
Where it has been calculated, the diagnostic sensitivity and specificity of these markers, and their capacity to differentiate patients with suspected AD from healthy controls and from individuals believed to be suffering from other neurodegenerative conditions, have been indicated. These findings are discussed in relation to existing hypotheses on the pathogenesis of AD and the current drug development pipeline. Many uncertainties remain in relation to the pathogenesis of AD and to diagnosing and treating the disease, and many of the studies carried out to identify disease markers are at an early stage and will require confirmation through larger and longer investigations. Nevertheless, significant advances in the identification of AD biomarkers have now been made. Moreover, whilst much of the research on AD biomarkers has focused on amyloid- and tau-related species, it is evident that a substantial number of other species may provide important opportunities. Purpose of Report: To provide a comprehensive review of important and recently discovered candidate biomarkers of AD, in particular those with the potential to reliably detect the disease or with utility in clinical development, drug repurposing, studies of the pathogenesis, and monitoring drug response and the course of the disease. Other key goals were to identify markers that support current pipeline developments, indicate new potential drug targets or advance understanding of the pathogenesis of this disease. Drug Repurposing: Studies of the pathogenesis of AD have identified aberrant changes in a number of other disease areas, including inflammation, diabetes, oxidative stress, lipid metabolism and others. These findings have prompted studies to evaluate some existing approved drugs to treat AD.
This report identifies studies of 9 established drug classes currently being investigated for potential repurposing. Alzheimer's Disease: In 2005, the global prevalence of dementia was estimated at 25 million, with more than 4 million new cases occurring each year. It is also calculated that the number of people affected will double every 20 years, to 80 million by 2040, if a cure is not found. More than 50% of dementia cases are due to AD. Today, approximately 5 million individuals in the US suffer from AD, representing one in eight people over the age of 65. Direct and indirect costs of AD and other forms of dementia in the US are around $150 billion annually. Worldwide, costs for dementia care are estimated at $315 billion annually. Despite significant research into this debilitating and ultimately fatal disease, advances in the development of diagnostic tests for AD and, moreover, effective treatments remain elusive. Background: Alzheimer's disease is the most common cause of dementia, yet its clinical diagnosis remains uncertain until an eventual post-mortem histopathology examination is carried out. Currently, therapy for patients with Alzheimer's disease only treats the symptoms; however, it is anticipated that new disease-modifying drugs will soon become available. The urgency for new and effective treatments for AD is matched by the need for new tests to detect and diagnose the condition. Uncertainties in the diagnosis of AD mean that the disease is often undiagnosed and undertreated. Moreover, it is clear that clinical confirmation of AD, using cognitive tests, can only be made after substantial neuronal cell loss has occurred; a process that may have taken place over many years. Poor response to current therapies may therefore, in part, reflect the fact that such treatments are generally commenced only after neuronal damage has occurred.
The absence of tests to detect or diagnose presymptomatic AD also means that there is no standard that can be applied to validate experimental findings (e.g. in drug discovery) without performing lengthy studies and eventual confirmation by autopsy. These limitations are focusing considerable effort on the identification of biomarkers that advance understanding of the pathogenesis of AD and of how the disease can be diagnosed in its early stages and treated. It is hoped that developments in these areas will help physicians to detect AD and guide therapy before the first signs of neuronal damage appear. The last 5-10 years have seen substantial research into the pathogenesis of AD, and this has led to the identification of a substantial number of AD biomarkers, which offer important insights into this disease. This report brings together the latest advances in the identification of AD biomarkers and analyses the opportunities they offer in drug R&D and diagnostics.
Abstract:
PURPOSE: Respiratory motion correction remains a challenge in coronary magnetic resonance imaging (MRI), and current techniques, such as navigator gating, suffer from sub-optimal scan efficiency and limited ease of use. To overcome these limitations, an image-based self-navigation technique is proposed that uses "sub-images" and compressed sensing (CS) to obtain translational motion correction in 2D. The method was preliminarily implemented as a 2D technique and tested for feasibility for targeted coronary imaging. METHODS: During a 2D segmented radial k-space data acquisition, heavily undersampled sub-images were reconstructed from the readouts collected during each cardiac cycle. These sub-images may then be used for respiratory self-navigation. Alternatively, a CS reconstruction may be used to create these sub-images, so as to partially compensate for the heavy undersampling. Both approaches were quantitatively assessed using simulations and in vivo studies, and the resulting self-navigation strategies were then compared to conventional navigator gating. RESULTS: Sub-images reconstructed using CS showed a lower artifact level than sub-images reconstructed without CS. As a result, the final image quality was significantly better when using CS-assisted self-navigation as opposed to the non-CS approach. Moreover, while both self-navigation techniques led to a 69% scan time reduction (as compared to navigator gating), there was no significant difference in image quality between the CS-assisted self-navigation technique and conventional navigator gating, despite the significant decrease in scan time. CONCLUSIONS: CS-assisted self-navigation using 2D translational motion correction demonstrated the feasibility of producing coronary MRA data with image quality comparable to that obtained with conventional navigator gating, without the use of additional acquisitions or motion modeling, while still allowing for 100% scan efficiency and improved ease of use.
In conclusion, compressed sensing may become a critical adjunct for 2D translational motion correction in free-breathing cardiac imaging with high spatial resolution. An expansion to modern 3D approaches is now warranted.
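As a toy illustration of the compressed-sensing principle invoked above (sparse recovery from heavily undersampled linear measurements), here is a minimal iterative soft-thresholding (ISTA) reconstruction. The random Gaussian measurements stand in for undersampled radial k-space; none of this is the paper's reconstruction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse ground truth: a 100-sample signal with 5 nonzeros, observed through
# 40 random linear measurements -- a toy stand-in for undersampled k-space
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

def ista(A, y, lam=0.01, iters=1000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - y))                        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

x_hat = ista(A, y)
rel_err = float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The same sparsity-promoting idea, with a wavelet or finite-difference sparsifying transform and the radial sampling operator in place of the random matrix, underlies practical CS reconstructions of sub-images.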