978 results for Prediction algorithms
Abstract:
The theory of fractional calculus goes back to the beginning of the theory of differential calculus, but its inherent complexity postponed the application of the associated concepts. In the last decade, progress in the areas of chaos and fractals revealed subtle relationships with fractional calculus, leading to increasing interest in the development of the new paradigm. In the area of automatic control, preliminary work has already been carried out, but the proposed algorithms are restricted to the frequency domain. This paper discusses the design of fractional-order discrete-time controllers. The algorithms studied adopt the time domain, which makes them suited for z-transform analysis and discrete-time implementation. The performance of discrete-time fractional-order controllers with linear and non-linear systems is also investigated.
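A common time-domain discretization for fractional-order operators of this kind is the Grünwald-Letnikov difference, whose binomial weights can be generated recursively. The abstract does not give the paper's exact algorithm, so the sketch below is only an illustrative instance of the technique, approximating an alpha-order derivative of a sampled signal:

```python
import numpy as np

def gl_weights(alpha: float, n: int) -> np.ndarray:
    """Binomial weights of the Grunwald-Letnikov fractional difference,
    via the stable recurrence c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c = np.empty(n + 1)
    c[0] = 1.0
    for k in range(1, n + 1):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def gl_derivative(x: np.ndarray, alpha: float, h: float) -> np.ndarray:
    """Approximate the alpha-order derivative of a signal x sampled with
    period h, using the truncated GL sum over all past samples."""
    c = gl_weights(alpha, len(x) - 1)
    y = np.empty_like(x, dtype=float)
    for n in range(len(x)):
        # D^alpha x[n] ~ h^(-alpha) * sum_{k=0..n} c_k * x[n-k]
        y[n] = np.dot(c[: n + 1], x[n::-1]) / h**alpha
    return y

# Example: half-order derivative of a unit step sampled at h = 0.01 s
h = 0.01
u = np.ones(100)
d_half = gl_derivative(u, 0.5, h)
```

A fractional PI^lambda D^mu controller would apply such a term to the error signal in place of the usual integer-order integral or derivative.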
Abstract:
Robotica 2012: 12th International Conference on Autonomous Robot Systems and Competitions, April 11, 2012, Guimarães, Portugal
Abstract:
Underground scenarios are among the most challenging environments for accurate and precise 3D mapping, with hostile conditions such as the absence of Global Positioning Systems, extreme lighting variations, and geometrically smooth surfaces. So far, the state-of-the-art methods in underground modelling remain restricted to environments in which pronounced geometric features are abundant. This limitation is a consequence of the scan-matching algorithms used to solve the localization and registration problems. This paper contributes to the expansion of modelling capabilities to structures characterized by uniform geometry and smooth surfaces, as is the case of road and train tunnels. To achieve that, we combine state-of-the-art techniques from mobile robotics and propose a method for 6DOF platform positioning in such scenarios, which is later used for environment modelling. A visual monocular Simultaneous Localization and Mapping (MonoSLAM) approach based on the Extended Kalman Filter (EKF), complemented by the introduction of inertial measurements in the prediction step, allows our system to localize itself over long distances using exclusively sensors carried on board a mobile platform. By feeding the Extended Kalman Filter with inertial data, we were able to overcome the major problem of MonoSLAM implementations, known as scale-factor ambiguity. Despite extreme lighting variations, reliable visual features were extracted through the SIFT algorithm and inserted directly into the EKF mechanism according to the Inverse Depth Parametrization. Wrong frame-to-frame feature matches were rejected through 1-Point RANSAC (Random Sample Consensus). The developed method was tested on a dataset acquired inside a road tunnel, and the navigation results were compared with a ground truth obtained by post-processing a high-grade Inertial Navigation System and L1/L2 RTK-GPS measurements acquired outside the tunnel. Results from the localization strategy are presented and analyzed.
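As a rough illustration of the structure described above (not the authors' implementation: the state vector, motion and projection models, and Jacobians here are placeholders), the skeleton of an EKF whose prediction step is driven by inertial data and whose update step is driven by visual feature measurements looks like this:

```python
import numpy as np

class EKF:
    """Minimal extended Kalman filter skeleton: inertial measurements
    drive the prediction step; visual features drive the update step."""

    def __init__(self, x0, P0):
        self.x = x0   # state estimate (e.g. pose, velocity, feature depths)
        self.P = P0   # state covariance

    def predict(self, f, F, Q, u):
        # u: inertial measurement (accelerations / angular rates);
        # f: motion model; F: its Jacobian evaluated at the current state.
        self.x = f(self.x, u)
        self.P = F @ self.P @ F.T + Q

    def update(self, z, h, H, R):
        # z: measured image coordinates of a feature;
        # h: projection model; H: its Jacobian at the current state.
        y = z - h(self.x)                      # innovation
        S = H @ self.P @ H.T + R               # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```

Because the accelerometer measurements entering predict() carry metric units, they anchor the otherwise unobservable scale of the monocular map, which is the scale-ambiguity fix the abstract refers to.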
Abstract:
The integrity of multi-component structures is usually determined by their unions. Adhesive bonding is often used over traditional joining methods because of the reduction of stress concentrations, reduced weight penalty, and easy manufacturing. Commercial adhesives range from strong and brittle (e.g., Araldite® AV138) to less strong and ductile (e.g., Araldite® 2015). A new family of polyurethane adhesives combines high strength and ductility (e.g., Sikaforce® 7888). In this work, the performance of the three above-mentioned adhesives was tested in single-lap joints with varying values of overlap length (LO). The experimental work is accompanied by a detailed numerical analysis by finite elements, based either on cohesive zone models (CZM) or the extended finite element method (XFEM). This procedure enabled a detailed assessment of these predictive techniques applied to bonded joints. Moreover, it was possible to evaluate which family of adhesives is more suited for each joint geometry. CZM proved to be highly accurate, except for largely ductile adhesives, although this could be circumvented with a different cohesive law. XFEM is not the most suitable technique for mixed-mode damage growth, but a rough prediction was achieved.
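For readers unfamiliar with CZM, the technique couples the finite-element model to a traction-separation law. The triangular (bilinear) law sketched below is the most common choice, and it matches the abstract's remark that very ductile adhesives may call for a different law shape (e.g., trapezoidal). The parameter names are generic, not taken from the paper:

```python
def triangular_czm(delta: float, K: float, t_max: float, delta_f: float) -> float:
    """Traction for a triangular (bilinear) cohesive law.
    K: initial stiffness, t_max: cohesive strength,
    delta_f: separation at complete failure.
    The dissipated fracture energy is the triangle area 0.5 * t_max * delta_f."""
    delta_0 = t_max / K            # separation at damage onset
    if delta <= delta_0:
        return K * delta           # undamaged elastic branch
    if delta >= delta_f:
        return 0.0                 # fully failed, no load transfer
    # linear softening between damage onset and failure
    return t_max * (delta_f - delta) / (delta_f - delta_0)
```

The sharp softening of the triangular shape suits brittle adhesives such as AV138; for ductile adhesives such as Sikaforce® 7888, a law with a stress plateau reproduces the experimental response more closely, consistent with the abstract's observation.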
Abstract:
This paper addresses the challenging task of computing multiple roots of a system of nonlinear equations. A repulsion algorithm that invokes the Nelder-Mead (N-M) local search method and uses a penalty-type merit function based on the error function, known as 'erf', is presented. In the N-M algorithm context, different strategies are proposed to enhance the quality of the solutions and improve the overall efficiency. The main goal of this paper is to use a two-level factorial design of experiments to analyze the statistical significance of the observed differences in selected performance criteria produced when testing different strategies in the N-M-based repulsion algorithm.
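A minimal sketch of the repulsion idea follows, assuming one plausible erf-based merit function (the paper's exact formulation may differ): once a root is found, the merit is inflated in its neighbourhood, so subsequent Nelder-Mead searches are driven toward roots not yet located.

```python
import numpy as np
from math import erf
from scipy.optimize import minimize

def find_multiple_roots(F, starts, radius=0.1, beta=10.0, tol=1e-4):
    """Repulsion-style multistart search for several roots of F(x) = 0.
    Sketch only: the merit is ||F(x)||^2 inflated near already-found
    roots by an erf-based repulsion factor (an assumed form)."""
    roots = []

    def merit(x):
        m = np.sum(np.asarray(F(x)) ** 2)
        for r in roots:
            d = np.linalg.norm(x - r)
            # erf(beta*d) -> 0 near a known root, so 1/erf blows up there,
            # repelling the local search from regions already explored
            m *= 1.0 / max(erf(beta * d), 1e-12)
        return m

    for x0 in starts:
        res = minimize(merit, x0, method='Nelder-Mead')
        x = res.x
        if np.sum(np.asarray(F(x)) ** 2) < tol and \
           all(np.linalg.norm(x - r) > radius for r in roots):
            roots.append(x)
    return roots

# Example: both roots of x^2 - 1 = 0 from two starting points
roots = find_multiple_roots(lambda x: np.array([x[0] ** 2 - 1.0]),
                            [np.array([2.0]), np.array([-2.0])])
```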
Abstract:
OBJECTIVE: To empirically test, based on a large multicenter, multinational database, whether a modified PIRO (predisposition, insult, response, and organ dysfunction) concept could be applied to predict mortality in patients with infection and sepsis. DESIGN: Substudy of a multicenter multinational cohort study (SAPS 3). PATIENTS: A total of 2,628 patients with signs of infection or sepsis who stayed in the ICU for >48 h. Three boxes of variables were defined, according to the PIRO concept. Box 1 (Predisposition) contained information about the patient's condition before ICU admission. Box 2 (Injury) contained information about the infection at ICU admission. Box 3 (Response) was defined as the response to the infection, expressed as a Sequential Organ Failure Assessment score after 48 h. INTERVENTIONS: None. MAIN MEASUREMENTS AND RESULTS: Most of the infections were community acquired (59.6%); 32.5% were hospital acquired. The median age of the patients was 65 (50-75) years, and 41.1% were female. About 22% (n=576) of the patients presented with infection only, 36.3% (n=953) with signs of sepsis, 23.6% (n=619) with severe sepsis, and 18.3% (n=480) with septic shock. Hospital mortality was 40.6% overall, greater in those with septic shock (52.5%) than in those with infection (34.7%). Several factors related to predisposition, infection and response were associated with hospital mortality. CONCLUSION: The proposed three-level system, by using objectively defined criteria for risk of mortality in sepsis, could be used by physicians to stratify patients at ICU admission or shortly thereafter, contributing to a better selection of management according to the risk of death.
Abstract:
Dissertation presented to obtain the Master's degree in Biomedical Engineering
Abstract:
Does carotid intima-media thickness (cIMT), a surrogate marker of cardiovascular events, have incremental predictive value over established risk factors for stable coronary artery disease (CAD)? Prospective study of 300 patients with suspected stable CAD, admitted for elective coronary angiography and carotid ultrasound. The CAD patients had a higher cIMT, which showed modest predictive accuracy for CAD (area under the receiver-operating characteristic curve 0.638, 95% confidence interval 0.576-0.701, P < .001). The cIMT was an independent predictor of CAD, together with age, gender, and diabetes. The C-statistic for CAD prediction by traditional risk factors was not significantly different from that of a model including cIMT, carotid plaque presence, or both. However, in women, it was significantly increased by the addition of cIMT or carotid plaque presence. Although cIMT cannot be used as a sole indicator of CAD, it should be considered in the panel of investigations requested, particularly in women who are candidates for coronary angiography.
Abstract:
Nowadays, real-time systems grow in both importance and complexity. With the transition from uniprocessor to multiprocessor environments, the work done for the former is not fully applicable to the latter, since the level of complexity differs, mainly due to the existence of multiple processors in the system. It was soon realized that the complexity of the problem does not grow linearly with the addition of processors. In fact, this complexity stands as a barrier to scientific progress in the area, which for now remains largely uncharted, and this is witnessed essentially in the case of task scheduling. The move to this new environment, whether for real-time systems or not, promises the opportunity to perform work that would never be possible in the former, thereby providing new performance guarantees, lower monetary costs and lower energy consumption. This last factor emerged early on as perhaps the greatest barrier to the development of new uniprocessor chips: as new processors reached the market offering ever higher performance, they also revealed a heat-generation limit that forced the emergence of the multiprocessor field. In the future, the number of processors on a given chip is expected to increase, and obviously new techniques to exploit their inherent advantages have to be developed; the area of scheduling algorithms is no exception. Over the years, different categories of multiprocessor scheduling algorithms have been developed to address this problem, most notably global, partitioned and semi-partitioned algorithms. The global approach assumes the existence of a global queue accessible by all available processors. This makes task migration possible, i.e., the execution of a task can be stopped and resumed on a different processor. At any given instant, from the set of ready tasks, the m highest-priority tasks are selected for execution. This type promises high utilization bounds, at the high cost of task preemptions/migrations. In contrast, partitioned algorithms place tasks into partitions, each of which is assigned to one of the available processors, i.e., each processor is assigned one partition. For that reason, task migration is not possible, so the utilization bound is not as high as in the previous case, but the number of task preemptions decreases significantly. The semi-partitioned scheme is a hybrid answer between the previous two: some tasks are split so as to be executed by a group of processors, while others are assigned to a single processor. This yields a solution capable of distributing the work to be performed in a more efficient and balanced way. Unfortunately, for all these cases there is a discrepancy between theory and practice, since assumptions end up being made that do not hold in real life. To address this problem, it is necessary to implement these scheduling algorithms in real operating systems and assess their applicability so that, where they fall short, the necessary changes can be made, both at the theoretical and at the practical level.
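As an illustration of the global approach described above (an illustrative sketch only, with EDF assumed as the priority rule; the thesis does not prescribe this exact code), a dispatch step simply picks the m highest-priority jobs from the single shared ready queue:

```python
import heapq

def global_edf_dispatch(ready_jobs, m):
    """One global EDF dispatch step: from a single shared ready queue,
    pick the m jobs with the earliest absolute deadlines to run on the
    m processors. Any job may resume on a different processor than the
    one it last ran on, i.e., migration is allowed.

    ready_jobs: list of (absolute_deadline, job_id) tuples."""
    return heapq.nsmallest(m, ready_jobs)

# Example: 4 ready jobs on a 2-processor platform
ready = [(12, 'A'), (5, 'B'), (9, 'C'), (30, 'D')]
running = global_edf_dispatch(ready, 2)   # -> [(5, 'B'), (9, 'C')]
```

A partitioned scheduler, by contrast, would keep one such queue per processor and never move a task between queues, which is exactly the trade-off between utilization bound and preemption/migration cost described above.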
Abstract:
Faced with the stagnation of uniprocessor technology over the past decade, the major microprocessor manufacturers found in multi-core technology the answer to the market's growing processing needs. For years, software developers saw their applications keep pace with the performance gains delivered by each new generation of sequential processors; but as processing capacity now scales with the number of processors, sequential computations have to be decomposed into several concurrent parts that can execute in parallel, so that they can use the additional processing units and complete sooner. Parallel programming entails a paradigm completely distinct from sequential programming. Unlike the sequential computers typified by the Von Neumann model, the heterogeneity of parallel architectures requires parallel programming models that abstract programmers away from architectural details and simplify the development of concurrent applications. The most popular parallel programming models encourage programmers to identify concurrent operations in their program logic and to specify them as tasks that can be assigned to distinct processors to execute simultaneously. These tasks are typically spawned at run time and assigned to processors by the underlying runtime. Since processing requirements tend to be variable and are not known a priori, the mapping of tasks to processors has to be determined dynamically, in response to unpredictable changes in execution requirements. As the volume of computation grows, it becomes less and less feasible to guarantee its timing constraints on uniprocessor platforms. While real-time systems begin to adapt to the parallel computing paradigm, there is a growing drive to integrate real-time executions with interactive applications on the same hardware, in a world where technology becomes ever smaller, lighter, more ubiquitous, and more portable. This integration requires scheduling solutions that simultaneously guarantee the timing requirements of real-time tasks and maintain an acceptable level of QoS for the remaining executions. To that end, it is imperative that real-time applications parallelize, in order to minimize their response times and maximize the utilization of processing resources. This introduces a new dimension to the scheduling problem, which has to respond correctly to new and unpredictable execution requirements and quickly devise the task mapping that best serves the system's performance criteria. Server-based scheduling makes it possible to reserve a fraction of the processing capacity for the execution of real-time tasks, and to ensure that latency effects on their execution do not affect the reservations stipulated for other executions. For tasks scheduled by their worst-case execution time, or tasks with variable execution times, it is likely that the allotted bandwidth will not be consumed in full. To improve system utilization, capacity-sharing algorithms donate the unused capacity to the execution of other tasks, while preserving the isolation guarantees between servers.

With proven efficiency in terms of space, time and communication, the work-stealing mechanism has been gaining popularity as a methodology for scheduling tasks with dynamic and irregular parallelism. The p-CSWS algorithm combines server-based scheduling with capacity-sharing and work-stealing to address the scheduling needs of open real-time systems. While server-based scheduling allows processing resources to be shared without interference-induced delays, a new work-stealing policy operating on top of the capacity-sharing mechanism exploits parallelism in a way that improves application response times and system utilization. This thesis proposes an implementation of the p-CSWS algorithm for Linux. In keeping with the modular structure of the Linux scheduler, a new scheduling class is defined with the aim of evaluating the applicability of the p-CSWS heuristic under real circumstances. Having overcome the obstacles intrinsic to Linux kernel programming, extensive experimental tests show that p-CSWS is more than an attractive theoretical concept, and that the heuristic parallelism exploitation proposed by the algorithm benefits the response times of real-time applications, as well as the performance and efficiency of the multiprocessor platform.
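To make the work-stealing discipline mentioned above concrete, here is a generic per-worker deque sketch (not the p-CSWS scheduler itself, and lock-based for brevity where production schedulers use lock-free Chase-Lev deques): the owning worker works LIFO at the bottom for cache locality, while idle workers steal FIFO from the top, where the oldest and typically largest pieces of work sit.

```python
from collections import deque
import threading

class WorkStealingDeque:
    """Minimal per-worker deque illustrating the work-stealing discipline."""

    def __init__(self):
        self._dq = deque()
        self._lock = threading.Lock()

    def push(self, task):
        # Owner only: add freshly spawned work at the bottom.
        with self._lock:
            self._dq.append(task)

    def pop(self):
        # Owner only: take the newest task (LIFO, good locality).
        with self._lock:
            return self._dq.pop() if self._dq else None

    def steal(self):
        # Thieves: take the oldest task from the top (FIFO).
        with self._lock:
            return self._dq.popleft() if self._dq else None
```

In a capacity-sharing setting such as p-CSWS, a steal is additionally constrained by the stealing server's residual budget, so that isolation guarantees between servers are preserved; that accounting layer is omitted here.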
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
Dissertation presented to obtain the Master's degree in Geological Engineering (Georesources)
Abstract:
In the last few years, we have observed an exponential increase in information systems, and parking information is one more example. Reliable, up-to-date information on parking-slot availability is very important to the goal of traffic reduction, and parking-slot prediction is a new topic that has already started to be applied: San Francisco in the United States and Santander in Spain are examples of projects carried out to obtain this kind of information. The aim of this thesis is the study and evaluation of methodologies for parking-slot prediction and their integration in a web application, where all kinds of users will be able to know the current parking status as well as future status according to the model predictions. The source of the data is ancillary in this work, but it still needs to be understood in order to understand the parking behaviour. There are many modelling techniques used for this purpose, such as time-series analysis, decision trees, neural networks and clustering. In this work, the author describes the techniques best suited to this task, analyses the results, and points out the advantages and disadvantages of each one. The model learns the periodic and seasonal patterns of the parking-status behaviour, and with this knowledge it can predict future status values for a given date. The data come from the Smart Park Ontinyent project and consist of parking occupancy status together with timestamps, stored in a database. After data acquisition, data analysis and pre-processing were needed for the model implementations. The first test used a boosting ensemble classifier over a set of decision trees, created with the C5.0 algorithm from a set of training samples, to assign a prediction value to each object. In addition to the predictions, this work includes error measurements that indicate how reliable the outcome predictions are. The second test used the TBATS seasonal exponential smoothing model. Finally, the last test tried a model that combines the previous two, to assess the result of the combination. The results were quite good for all of them, with average errors of 6.2, 6.6 and 5.4 vacancies in the predictions of the three models, respectively; for a car park of 47 places, this means roughly a 10% average error in parking-slot predictions. This result could be even better with a longer data history. In order to make this kind of information visible and reachable by everyone with an Internet-connected device, a web application was built. Besides displaying the data, this application also offers different functions to improve the task of searching for parking. The new functions, apart from parking prediction, were:
- Park distances from the user's location: provides the distances from the user's current location to the different car parks in the city.
- Geocoding: the service for matching a literal description or an address to a concrete location.
- Geolocation: the service for positioning the user.
- Parking list panel: neither a service nor a function, just a better visualization and handling of the information.
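To give a concrete flavour of the first test, the sketch below trains a boosted decision-tree classifier on calendar features extracted from the timestamps, which is how a model can pick up the periodic and seasonal patterns the abstract describes. Since C5.0 is an R/proprietary implementation, scikit-learn's GradientBoostingClassifier is used here as an analogue, and the file name and column names are placeholders rather than the Smart Park Ontinyent schema:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Occupancy log with a 'timestamp' column and an 'occupied' status column
# (placeholder names; the real dataset schema may differ).
df = pd.read_csv('parking_status.csv', parse_dates=['timestamp'])

# Encode the periodic / seasonal structure as calendar features.
X = pd.DataFrame({
    'hour': df['timestamp'].dt.hour,
    'weekday': df['timestamp'].dt.weekday,
    'month': df['timestamp'].dt.month,
})
y = df['occupied']   # e.g. 1 = slot occupied, 0 = free

model = GradientBoostingClassifier().fit(X, y)

# Predict the status for a future date: Friday, 18:00, in June.
future = pd.DataFrame({'hour': [18], 'weekday': [4], 'month': [6]})
print(model.predict(future))
```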
Abstract:
Zero-valent iron nanoparticles (nZVI) are considered very promising for the remediation of contaminated soils and groundwaters. However, an important issue related to their limited mobility remains unsolved. Direct current can be used to enhance nanoparticle transport, based on the same principles as electrokinetic remediation. In this work, a generalized physicochemical model was developed and solved numerically to describe nZVI transport through porous media under an electric field and with different electrolytes (of different ionic strengths). The model consists of the Nernst–Planck coupled system of equations, which accounts for the mass balance of ionic species in a fluid medium when both diffusion and electromigration of the ions are considered. The diffusion and electrophoretic transport of the negatively charged nZVI particles were also considered in the system. The contribution of electroosmotic flow to the overall mass transport was included in the model for all cases. The nZVI effective mobility values in the porous medium are very low (10⁻⁷–10⁻⁴ cm² V⁻¹ s⁻¹), due to the counterbalance between the positive electroosmotic flow and the electrophoretic transport of the negatively charged nanoparticles. The higher the nZVI concentration in the matrix, the higher the aggregation; therefore, low-concentration nZVI suspensions must be used for successful field application.
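To illustrate the kind of transport balance the model solves, the sketch below integrates a single-species 1D advection-diffusion analogue, lumping electromigration and electroosmosis into one net drift velocity. This is a simplification: the paper solves the fully coupled Nernst–Planck system for all ionic species, and every parameter value here is a placeholder.

```python
import numpy as np

# Illustrative 1D transport of one charged species under a constant field:
# diffusion plus a lumped drift (electromigration + electroosmosis).
L, nx = 0.1, 101                 # domain length [m], grid points
dx = L / (nx - 1)
D = 1e-9                         # diffusion coefficient [m^2/s]
v = 1e-6                         # net drift velocity [m/s]
dt = 0.4 * dx**2 / D             # explicit stability limit for diffusion
c = np.zeros(nx)
c[0] = 1.0                       # fixed-concentration injection boundary

for _ in range(20000):
    # FTCS diffusion + upwind advection for the drift term (v > 0)
    diff = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    adv = -v * (c[1:-1] - c[:-2]) / dx
    c[1:-1] += dt * (diff + adv)
    c[-1] = c[-2]                # zero-gradient outflow boundary
```

In the full model, the drift term is species-dependent (sign and magnitude set by charge and mobility), which is how the counterbalance between electroosmotic flow and the electrophoresis of the negatively charged nZVI produces the very low effective mobilities reported above.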
Abstract:
The development of human cell models that recapitulate hepatic functionality allows the study of metabolic pathways involved in toxicity and disease. The increased biological relevance, cost-effectiveness and high throughput of cell models can contribute to increasing the efficiency of drug development in the pharmaceutical industry. Recapitulating liver functionality in vitro requires the development of advanced culture strategies that mimic in vivo complexity, such as 3D culture, co-cultures or biomaterials. However, complex 3D models are typically associated with poor robustness, limited scalability and limited compatibility with screening methods. In this work, several strategies were used to develop highly functional and reproducible spheroid-based in vitro models of human hepatocytes and HepaRG cells using stirred culture systems. In chapter 2, the isolation of human hepatocytes from resected liver tissue was implemented and a liver-tissue perfusion method was optimized to improve hepatocyte isolation and aggregation efficiency, resulting in an isolation protocol compatible with 3D culture. In chapter 3, human hepatocytes were co-cultivated with mesenchymal stem cells (MSC) and the phenotype of both cell types was characterized, showing that MSC acquire a supportive stromal function while hepatocytes retain differentiated hepatic functions, stability of drug-metabolism enzymes and higher viability in co-culture. In chapter 4, a 3D alginate microencapsulation strategy for the differentiation of HepaRG cells was evaluated and compared with the standard 2D DMSO-dependent differentiation, yielding higher differentiation efficiency, comparable levels of drug-metabolism activity and significantly improved biosynthetic activity. The work developed in this thesis provides novel strategies for the 3D culture of human hepatic cell models that are reproducible, scalable and compatible with screening platforms. The phenotypic and functional characterization of the in vitro systems contributes to the state of the art of human hepatic cell models and can be applied to improving the efficiency of pre-clinical drug development, to disease modelling and, ultimately, to the development of cell-based therapeutic strategies for liver failure.