899 results for the SIMPLE algorithm
Abstract:
In this work we explore optimising parameters of a physical circuit model relative to input/output measurements, using the Dallas Rangemaster Treble Booster as a case study. A hybrid metaheuristic/gradient descent algorithm is implemented, where the initial parameter sets for the optimisation are informed by nominal values from schematics and datasheets. Sensitivity analysis is used to screen parameters, which informs a study of the optimisation algorithm against model complexity by fixing parameters. The results of the optimisation show a significant increase in the accuracy of model behaviour, but also highlight several key issues regarding the recovery of parameters.
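The abstract does not include code; the sketch below only illustrates the general hybrid scheme it describes, assuming a generic `model(params, x_in)` circuit simulator and measured input/output pairs. The candidate count, perturbation spread and learning rate are illustrative placeholders, not the study's settings.

```python
import numpy as np

def loss(params, model, x_in, y_meas):
    """Mean squared error between simulated and measured output."""
    return np.mean((model(params, x_in) - y_meas) ** 2)

def hybrid_fit(model, x_in, y_meas, nominal, n_candidates=64,
               spread=0.2, lr=1e-3, n_grad_steps=200, eps=1e-6):
    # Metaheuristic stage: random candidate parameter sets centred on the
    # nominal (schematic/datasheet) values; keep the best one.
    rng = np.random.default_rng(0)
    candidates = nominal * (1 + spread * rng.standard_normal((n_candidates, nominal.size)))
    best = min(candidates, key=lambda p: loss(p, model, x_in, y_meas))

    # Gradient-descent stage: refine the best candidate using a
    # central finite-difference estimate of the loss gradient.
    p = best.copy()
    for _ in range(n_grad_steps):
        grad = np.zeros_like(p)
        for i in range(p.size):
            dp = np.zeros_like(p)
            dp[i] = eps * max(abs(p[i]), 1.0)
            grad[i] = (loss(p + dp, model, x_in, y_meas)
                       - loss(p - dp, model, x_in, y_meas)) / (2 * dp[i])
        p -= lr * grad
    return p
```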
Abstract:
BACKGROUND: A pretrial clinical improvement project for the BOOST-II UK trial of oxygen saturation targeting revealed an artefact affecting saturation profiles obtained from the Masimo SET Radical pulse oximeter.
METHODS: Saturation was recorded every 10 s for up to 2 weeks in 176 oxygen dependent preterm infants in 35 UK and Irish neonatal units between August 2006 and April 2009 using Masimo SET Radical pulse oximeters. Frequency distributions of % time at each saturation were plotted. An artefact affecting the saturation distribution was found to be attributable to the oximeter's internal calibration algorithm. Revised software was installed and saturation distributions obtained were compared with four other current oximeters in paired studies.
RESULTS: There was a reduction in the frequency of saturation values of 87-90%. Values above 87% were elevated by up to 2%, giving a relative excess of higher values. The software revision eliminated this artefact, improving the distribution of saturation values. In paired comparisons with four current commercially available oximeters, Masimo oximeters with the revised software returned similar saturation distributions.
CONCLUSIONS: A characteristic of the software algorithm reduces the frequency of saturations of 87-90% and increases the frequency of higher values returned by the Masimo SET Radical pulse oximeter. This effect, which remains within the recommended standards for accuracy, is removed by installing revised software (board firmware V4.8 or higher). Because this observation is likely to influence oxygen targeting, it should be considered in the analysis of the oxygen trial results to maximise their generalisability.
Abstract:
In Germany the upscaling algorithm is currently the standard approach for evaluating the PV power produced in a region. This method involves spatially interpolating the normalized power of a set of reference PV plants to estimate the power production of another set of unknown plants. As little information on the performance of this method could be found in the literature, the first goal of this thesis is to conduct an analysis of the uncertainty associated with this method. It was found that this method can lead to large errors when the set of reference plants has different characteristics or weather conditions than the set of unknown plants, and when the set of reference plants is small. Based on these preliminary findings, an alternative method is proposed for calculating the aggregate power production of a set of PV plants. A probabilistic approach has been chosen, by which the power production at each PV plant is calculated from corresponding weather data. The probabilistic approach consists of evaluating the power for each frequently occurring value of the parameters and estimating the most probable value by averaging these power values weighted by their frequency of occurrence. The most frequent parameter sets (e.g. module azimuth and tilt angle) and their frequency of occurrence have been assessed on the basis of a statistical analysis of the parameters of approx. 35 000 PV plants. It has been found that the plant parameters are statistically dependent on the size and location of the PV plants. Accordingly, separate statistical values have been assessed for 14 classes of nominal capacity and 95 regions in Germany (two-digit zip-code areas). The performances of the upscaling and probabilistic approaches have been compared on the basis of 15 min power measurements from 715 PV plants provided by the German distribution system operator LEW Verteilnetz. It was found that the error of the probabilistic method is smaller than that of the upscaling method when the number of reference plants is sufficiently large (>100 reference plants in the case study considered in this chapter). When the number of reference plants is limited (<50 reference plants for the considered case study), it was found that the proposed approach provides a noticeable gain in accuracy with respect to the upscaling method.
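As an illustration of the weighted-average step of the probabilistic estimate described above, the following sketch assumes a hypothetical `pv_model(weather, azimuth, tilt)` function; the parameter sets and frequencies shown are placeholders, not the statistics derived from the ~35 000 plants.

```python
import numpy as np

def probabilistic_power(weather, param_sets, frequencies, pv_model):
    """Most probable plant power: evaluate the power for each frequently
    occurring parameter set and average, weighted by frequency of occurrence."""
    freqs = np.asarray(frequencies, dtype=float)
    freqs /= freqs.sum()                              # normalise to probabilities
    powers = np.array([pv_model(weather, *p) for p in param_sets])
    return float(np.dot(freqs, powers))

# Illustrative parameter statistics for one capacity class / region
# (hypothetical values, not the thesis' fitted statistics):
param_sets  = [(180, 30), (180, 35), (160, 30), (200, 25)]   # (azimuth deg, tilt deg)
frequencies = [0.40, 0.30, 0.20, 0.10]
```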
Abstract:
The numbers of water-borne oomycete propagules in outdoor reservoirs used in horticultural nurseries within the UK are investigated in this study. Water samples were recovered from 11 different horticultural nurseries in the southern UK during Jan-May in two ‘cool’ years (2010 and 2013; winter temperatures 2.0 and 0.4 °C below the UK Met Office 30-year winter average respectively) and two ‘warm’ years (2008 and 2012; winter temperatures 1.2 and 0.9 °C above the UK Met Office 30-year winter average respectively). Samples were analysed for the total number of oomycete colony forming units (CFU), predominantly members of the families Saprolegniaceae and Pythiaceae, and these were combined to give monthly mean counts. The numbers of CFU were investigated with respect to the prevailing climate in the region: mean monthly air temperatures calculated using daily observations from the nearest climatological station. The investigations show that the number of CFU during spring can be explained by a linear first-order relationship, with a statistically significant r² value of 0.66: [CFU] = a(T - Tb) + b, where a is the rate of inoculum development with temperature T, and b is the baseload population at temperatures below the threshold Tb. Despite the majority of oomycete CFU detected being non-phytopathogenic members of the Saprolegniaceae, total oomycete CFU counts are still of considerable value as indicators of irrigation water treatment efficacy and cleanliness of storage tanks. The presence/absence of Pythium spp. was also determined for all samples tested, and Pythium CFU were found to be present in the majority, the exceptions all being particularly cold months (January and February 2010 and January 2008). A simple scenario study (+2 °C) suggests that the abundance of water-borne oomycetes during spring could be affected by increased temperatures due to climate change.
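A minimal illustration of how such a fitted relationship could be applied in a +2 °C scenario, reading b as a baseload that applies below the threshold temperature Tb; the coefficients below are hypothetical placeholders, not the paper's fitted values.

```python
def cfu_estimate(T, a=50.0, b=20.0, Tb=4.0):
    """Estimated oomycete CFU per sample at mean monthly air temperature T (deg C),
    using [CFU] = a*(T - Tb) + b above the threshold and the baseload b below it.
    Coefficients a, b, Tb are illustrative only."""
    return a * max(T - Tb, 0.0) + b

# Simple +2 deg C climate scenario for a spring month:
baseline = cfu_estimate(8.0)
warmed   = cfu_estimate(8.0 + 2.0)
print(baseline, warmed)   # warming raises the expected spring inoculum load
```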
Abstract:
The mammalian binaural cue of interaural time difference (ITD) and cross-correlation have long been used to determine the point of origin of a sound source. The ITD can be defined as the different points in time at which a sound from a single location arrives at each individual ear [1]. From this time difference, the brain can calculate the angle of the sound source in relation to the head [2]. Cross-correlation compares the similarity of each channel of a binaural waveform, producing the time lag or offset required for both channels to be in phase with one another. This offset corresponds to the maximum value produced by the cross-correlation function and can be used to determine the ITD and thus the azimuthal angle θ of the original sound source. However, in indoor environments, cross-correlation is known to have problems with both sound reflections and reverberation. Additionally, cross-correlation has difficulty localising short-term complex noises when they occur within a longer-duration waveform, i.e. in the presence of background noise: because the cross-correlation algorithm processes the entire waveform, the short-term complex sound can effectively be ignored. This paper presents a thresholding technique which enables better localisation of short-term complex sounds in the midst of background noise. To determine the success of this thresholding technique, twenty-five sounds consisting of hand-claps, finger-clicks and speech were recorded in a dynamic and echoic environment. The proposed technique was compared to the regular cross-correlation function for the same waveforms, and the azimuthal angle estimates for each individual sample were averaged. The sound localisation accuracy over all twenty-five sound samples was 44% using cross-correlation alone and 84% using the cross-correlation technique with thresholding. From these results, it is clear that the proposed technique is very successful for the localisation of short-term complex sounds in the midst of background noise in a dynamic and echoic indoor environment.
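One plausible way to implement thresholded cross-correlation localisation is sketched below. The microphone spacing, speed of sound and threshold ratio are assumed values, and segmenting the waveform by an envelope threshold is an interpretation of the technique rather than the authors' exact method.

```python
import numpy as np

def azimuth_from_itd(left, right, fs, mic_distance=0.15, c=343.0,
                     threshold_ratio=0.5):
    """Estimate source azimuth (degrees) from a stereo recording.

    Thresholding (one plausible reading of the technique): cross-correlate
    only the segment where the signal envelope exceeds a fraction of its
    peak, so a short, loud event dominates the estimate instead of the
    long background-noise portion of the waveform.
    """
    env = np.abs(left) + np.abs(right)
    mask = env >= threshold_ratio * env.max()
    idx = np.flatnonzero(mask)
    l, r = left[idx[0]:idx[-1] + 1], right[idx[0]:idx[-1] + 1]

    # Full cross-correlation; the lag of its maximum gives the ITD in samples.
    corr = np.correlate(l, r, mode="full")
    lag = np.argmax(corr) - (len(r) - 1)
    itd = lag / fs

    # Geometric model: ITD = d * sin(theta) / c  ->  theta = arcsin(c * ITD / d)
    s = np.clip(c * itd / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(s))
```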
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The secretaries of the secret were an essential element of the Holy Office's district courts. They were in charge of recording and writing all of the official documents of these tribunals, but also of keeping the archive in order. And not only of these tasks, which shows they were not the simple bureaucrats that traditional historians wrote about. In fact, their long working hours turned a single, well-defined office into a complex taxonomy of professionals who shared the secretaries of the secret's concerns but not their privileges. This paper aims to go in depth into these professionals' everyday life. They were not officials, but they took care of some important duties even though they were not paid for them.
Abstract:
Finance is one of the fastest growing areas in modern applied mathematics, with real-world applications. The interest of this branch of applied mathematics is best described by an example involving shares. Shareholders of a company receive dividends, which come from the profit made by the company. The proceeds of the company, once it is taken over or wound up, will also be distributed to shareholders. Therefore shares have a value that reflects the views of investors about the likely dividend payments and capital growth of the company. Obviously, such value is quantified by the share price on stock exchanges. Financial modelling therefore serves to understand the correlations between assets and buy/sell movements in order to reduce risk. Such activities depend on financial analysis tools being available to the trader, with which rapid and systematic evaluation of buy/sell contracts can be made. There are other financial activities, but it is not the intention of this paper to discuss them all. The main concern of this paper is to propose a parallel algorithm for the numerical solution of a European option. The paper is organised as follows. First, a brief introduction is given to a simple mathematical model for European options and possible numerical schemes for solving it. Second, the Laplace transform is applied to the mathematical model, which leads to a set of parametric equations whose solutions may be found concurrently. The numerical inverse Laplace transform is carried out by means of an inversion algorithm developed by Stehfest. The scalability of the algorithm in a distributed environment is demonstrated. Third, the performance of the present algorithm is compared with that of a spatial domain decomposition developed particularly for the time-dependent heat equation. Finally, a number of issues are discussed and future work is suggested.
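The Stehfest inversion the abstract refers to is a standard numerical method; a minimal sketch follows, with the option-pricing transform replaced by a simple test transform whose inverse is known. The independence of the F(k ln 2 / t) evaluations is what allows them to be computed concurrently in a distributed setting.

```python
import math

def stehfest_inverse(F, t, N=12):
    """Gaver-Stehfest numerical inverse Laplace transform.

    Approximates f(t) from its transform F(s) as
    f(t) ~ (ln 2 / t) * sum_k V_k * F(k * ln 2 / t), with N even.
    """
    assert N % 2 == 0
    ln2 = math.log(2.0)
    half = N // 2
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k
        v = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            v += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        v *= (-1) ** (half + k)
        # Each term F(k * ln2 / t) is independent of the others, so these
        # evaluations can be distributed across processes.
        total += v * F(k * ln2 / t)
    return ln2 / t * total

# Quick check with a known pair: F(s) = 1/(s + 1)  ->  f(t) = exp(-t)
print(stehfest_inverse(lambda s: 1.0 / (s + 1.0), t=1.0))  # ~ 0.3679
```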
Abstract:
Multilevel algorithms are a successful class of optimisation techniques which address the mesh partitioning problem. They usually combine a graph contraction algorithm with a local optimisation method which refines the partition at each graph level. In this paper we present an enhancement of the technique which uses imbalance to achieve higher quality partitions. We also present a formulation of the Kernighan-Lin partition optimisation algorithm which incorporates load-balancing. The resulting algorithm is tested against a different but related state-of-the-art partitioner and shown to provide improved results.
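A greatly simplified, greedy sketch of Kernighan-Lin-style refinement with a load-balance constraint is given below for illustration; the full algorithm also considers tentative negative-gain moves and swaps, which are omitted here, and the paper's own formulation is not reproduced.

```python
from collections import defaultdict

def refine_bisection(adj, part, weights, max_imbalance=0.05, passes=3):
    """Greedy, Kernighan-Lin-flavoured refinement of a 2-way partition with
    a load-balance constraint: a vertex moves to the other part only if the
    move reduces the edge cut and keeps part weights within the allowed
    imbalance.

    adj: {v: set of neighbours}; part: {v: 0 or 1}; weights: {v: weight}.
    """
    total = sum(weights.values())
    limit = (1 + max_imbalance) * total / 2     # maximum allowed part weight
    size = defaultdict(float)
    for v, p in part.items():
        size[p] += weights[v]

    for _ in range(passes):
        moved = False
        for v in adj:
            p, q = part[v], 1 - part[v]
            # Gain = cut edges removed (neighbours in q) - cut edges created.
            gain = sum(1 for u in adj[v] if part[u] == q) \
                 - sum(1 for u in adj[v] if part[u] == p)
            if gain > 0 and size[q] + weights[v] <= limit:
                part[v] = q
                size[p] -= weights[v]
                size[q] += weights[v]
                moved = True
        if not moved:
            break
    return part
```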
Abstract:
Unfolding the Archive, an exhibition of new work by the international artists’ group Floating World, is the result of a collaboration between the National Irish Visual Arts Library (NIVAL) and the F.E. McWilliam Gallery & Studio in partnership with the NCAD Gallery at the National College of Art & Design. The exhibition takes its title from the tangible starting point for engagement with an archive – the simple act of unfolding – and the practice of appraisal, valuation and interpretation that is inherent in this process.
Abstract:
With the development of electronic devices, more and more mobile clients are connected to the Internet, and they generate massive amounts of data every day. We live in an age of "Big Data", producing data on the scale of hundreds of millions of records every day. By analyzing these data and making predictions, better development plans can be drawn up. Unfortunately, traditional computation frameworks cannot meet this demand, which is why Hadoop was put forward. The paper first introduces the background and development status of Hadoop, compares MapReduce in Hadoop 1.0 with YARN in Hadoop 2.0, and analyzes their respective advantages and disadvantages. Because resource management is the core role of YARN, the paper then studies the resource allocation module, including resource management, the resource allocation algorithm, the resource preemption model, and the whole resource scheduling process from requesting resources to completing the allocation. It also introduces and compares the FIFO Scheduler, Capacity Scheduler, and Fair Scheduler. The main work of this paper is researching and analyzing the Dominant Resource Fairness (DRF) algorithm of YARN and putting forward a maximum-resource-utilization algorithm based on it. The paper also suggests improvements to shortcomings in the resource preemption model. Emphasizing "fairness" during resource allocation is the core concept of the DRF algorithm in YARN. Because the cluster serves multiple users and offers multiple resource types, each user's resource request is also multi-dimensional. The DRF algorithm divides a user's requested resources into the dominant resource and normal resources: for a user, the dominant resource is the one whose share of the cluster is highest among all requested resources, and the others are normal resources. DRF requires the dominant resource share of each user to be equal. But in cases where different users' dominant resource demands differ greatly, emphasizing "fairness" is not suitable and cannot raise the resource utilization of the cluster. By analyzing such cases, this thesis puts forward a new allocation algorithm based on DRF. The new algorithm still takes "fairness" into consideration, but not as its main principle; maximizing resource utilization is its main principle and goal. Comparing the results of DRF and the new DRF-based algorithm shows that the new algorithm achieves higher resource utilization than DRF. The last part of the thesis installs a YARN environment and uses the Scheduler Load Simulator (SLS) to simulate the cluster environment.
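A minimal sketch of the DRF allocation idea described above (not the YARN implementation, and not the thesis' modified algorithm): the scheduler repeatedly serves the user with the lowest dominant share. The cluster capacity and per-task demands are illustrative.

```python
def drf_allocate(capacity, demands, max_rounds=1000):
    """Minimal sketch of Dominant Resource Fairness (DRF) allocation.

    capacity: total cluster resources, e.g. {"cpu": 9, "mem": 18}.
    demands:  per-task demand of each user, e.g. {"A": {"cpu": 1, "mem": 4}}.
    Repeatedly gives the next task to the user with the lowest dominant
    share (the largest fraction of any single resource that user holds).
    """
    used = {u: {r: 0.0 for r in capacity} for u in demands}
    tasks = {u: 0 for u in demands}
    consumed = {r: 0.0 for r in capacity}

    for _ in range(max_rounds):
        dom = {u: max(used[u][r] / capacity[r] for r in capacity) for u in demands}
        for u in sorted(demands, key=lambda user: dom[user]):
            d = demands[u]
            if all(consumed[r] + d[r] <= capacity[r] for r in capacity):
                for r in capacity:
                    used[u][r] += d[r]
                    consumed[r] += d[r]
                tasks[u] += 1
                break
        else:
            break  # no user's next task fits: the cluster is saturated
    return tasks, used

# Two-user example (illustrative demands):
tasks, _ = drf_allocate({"cpu": 9, "mem": 18},
                        {"A": {"cpu": 1, "mem": 4}, "B": {"cpu": 3, "mem": 1}})
print(tasks)  # {'A': 3, 'B': 2} with these demands
```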
Abstract:
The present document deals with the optimization of the shape of aerodynamic profiles. The objective is to reduce the drag coefficient of a given profile without penalising the lift coefficient. A set of control points defining the geometry is passed and parameterized as a B-Spline curve; these points are modified automatically by means of CFD analysis. A given shape is defined by a user, and a valid volumetric CFD domain is constructed from this planar data and a set of user-defined parameters. The construction process involves 2D and 3D meshing algorithms that were coupled into our own code. The volume of air surrounding the airfoil and the mesh quality are also parametrically defined. Some standard NACA profiles were used to test the algorithm, first obtaining their control points. The Navier-Stokes equations were solved for turbulent, steady-state flow of compressible fluids using the k-epsilon model and the SIMPLE algorithm. In order to obtain data for the optimization process, a utility to extract drag and lift data from the CFD simulation was added; after a simulation is run, the drag and lift data are passed to the optimization process. A gradient-based method using steepest descent was implemented to define the magnitude and direction of the displacement of each control point. The control points and the other parameters defined as design variables are iteratively modified in order to reach an optimum. Preliminary results on conceptual examples show a decrease in drag and a change in geometry that obeys aerodynamic principles.
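A sketch of the gradient-based loop described above, with the meshing and CFD solve abstracted behind a placeholder `run_cfd` callable (an assumption, not the document's actual pipeline); the step sizes are illustrative and the lift constraint is omitted for brevity.

```python
import numpy as np

def steepest_descent_shape_opt(control_points, run_cfd, step=1e-3,
                               fd_delta=1e-4, iterations=20):
    """Illustrative steepest-descent loop for airfoil shape optimization.

    control_points: (n, 2) array of B-spline control point coordinates.
    run_cfd: placeholder callable (mesh + solve + force extraction) that
             returns (drag, lift) for a given set of control points.
    Each control point is moved against a finite-difference estimate of
    the drag gradient.
    """
    cp = np.asarray(control_points, dtype=float).copy()
    for _ in range(iterations):
        drag0, _lift0 = run_cfd(cp)
        grad = np.zeros_like(cp)
        for i in range(cp.shape[0]):
            for j in range(cp.shape[1]):
                perturbed = cp.copy()
                perturbed[i, j] += fd_delta
                drag_p, _ = run_cfd(perturbed)
                grad[i, j] = (drag_p - drag0) / fd_delta
        cp -= step * grad      # steepest descent: move against the drag gradient
    return cp
```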
Abstract:
The present study was aimed at assessing the experience of a single referral center with recurrent varicose veins of the legs (RVL) over the period 1993-2008. Among a total of 846 procedures for leg varices (LV), 74 procedures were for RVL (8.7%). The causes of recurrence were classified as classic: insufficient crossectomy (13); incompetent perforating veins (13); reticular phlebectasia (22); small saphenous vein insufficiency (9); accessory saphenous veins (4); and particular: post-hemodynamic treatment (5); incomplete stripping (1); sapheno-femoral junction (SFJ) vascularization (5); post-thermal ablation (2). For the “classic” RVL the treatment consisted essentially of completing the previous treatment, both if the problem was linked to an insufficient earlier treatment and if it was due to a later onset. The most common cause in our series was reticular phlebectasia; when simple sclerosing injections were not sufficient, this was treated by phlebectomy according to Mueller. The “particular” cases classified as 1, 2 and 4 were also treated by completing the traditional stripping procedure (plus crossectomy if this had not been done previously), considered to be the gold standard. In the presence of an SFJ neo-vascularization, with or without cavernoma, approximately 5 cm of femoral vein were explored, the afferent vessels ligated and, if cavernoma was present, it was removed. Although inguinal neo-angiogenesis is a possible mechanism, some doubt can be raised as to its importance as a primary factor in causing recurrent varicose veins, rather than their being due to a preexisting vein left in situ because it was ignored, regarded as insignificant, or poorly evident. In conclusion, we stress that LV is a progressive disease, so treatment is unlikely to be confined to a single procedure. It is important to plan adequate monitoring during follow-up, and to be ready to reoperate when new problems present that, if left untreated, could lead the patient to doubt the validity and efficacy of the original treatment.
Abstract:
An algorithm based on a Bayesian network classifier was adapted to produce 10-day burned area (BA) maps from the Long Term Data Record Version 3 (LTDR) at a spatial resolution of 0.05° (~5 km) for the North American boreal region from 2001 to 2011. The modified algorithm used the Brightness Temperature channel from the Moderate Resolution Imaging Spectroradiometer (MODIS) band 31 T31 (11.03 μm) instead of the Advanced Very High Resolution Radiometer (AVHRR) band T3 (3.75 μm). The accuracy of the BA-LTDR, the Collection 5.1 MODIS Burned Area (MCD45A1), the MODIS Collection 5.1 Direct Broadcast Monthly Burned Area (MCD64A1) and the Burned Area GEOLAND-2 (BA GEOLAND-2) products was assessed using reference data from the Alaska Fire Service (AFS) and the Canadian Forest Service National Fire Database (CFSNFD). The linear regression analysis of the burned area percentages of the MCD64A1 product using 40 km × 40 km grids versus the reference data for the years from 2001 to 2011 showed an agreement of R² = 0.84 and a slope of 0.76, while the BA-LTDR showed an agreement of R² = 0.75 and a slope of 0.69. These results represent an improvement over the MCD45A1 product, which showed an agreement of R² = 0.67 and a slope of 0.42. The MCD64A1, BA-LTDR and MCD45A1 products underestimated the total burned area in the study region, whereas the BA GEOLAND-2 product overestimated it by approximately five-fold, with an agreement of R² = 0.05. Despite MCD64A1 showing the best overall results, the BA-LTDR product proved to be an alternative for mapping burned areas in the North American boreal forest region compared with the other global BA products, even those with higher spatial/spectral resolution.
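For illustration, the grid-cell agreement metrics used in the comparison (slope and R² of an ordinary least-squares fit of product versus reference burned-area percentages) can be computed as sketched below; the input values shown are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

def grid_agreement(product_ba, reference_ba):
    """Slope and R² of product burned-area percentages regressed on the
    reference percentages over matching grid cells (e.g. 40 km x 40 km)."""
    x = np.asarray(reference_ba, dtype=float).ravel()
    y = np.asarray(product_ba, dtype=float).ravel()
    res = stats.linregress(x, y)
    return res.slope, res.rvalue ** 2

# Hypothetical toy values for illustration only:
slope, r2 = grid_agreement(product_ba=[1.0, 3.5, 0.0, 10.2],
                           reference_ba=[1.2, 4.0, 0.0, 12.5])
```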
Abstract:
The mobile robot route planning problem consists of determining the best route for a robot, in a static and/or dynamic environment, that is able to move it from an initial point to a final point, also known as the goal state. The present work employs an approach based on Genetic Algorithms for route planning of multiple robots in a complex environment composed of fixed and moving obstacles. Implementing the model in the NetLogo software, a tool used for simulating multi-agent applications, made it possible to model the robots and obstacles present in the environment as interactive agents, thus enabling the development of obstacle detection and avoidance processes. The approach searches for the best route for the robots and presents a model composed of the basic reproduction and mutation operators, plus a new double refinement operator capable of improving the best solutions found by eliminating useless moves. In addition, the route calculation for each robot adopts a method of generating sub-stretches; that is, it does not compute a single route connecting the initial and final points of the scenario, but rather several small sub-routes which, connected together, form a single path capable of leading the robot to the goal state. Two scenarios were developed in this work to evaluate its scalability: the first is a simple scenario composed of only one robot, one moving obstacle and a few fixed obstacles; the second is a more robust, larger scenario composed of multiple robots and several fixed and moving obstacles. Finally, comparative performance tests were carried out between the Genetic Algorithm-based approach and the A* Algorithm. The comparison criterion was the length of the routes obtained in the twenty simulations executed with each approach. The analysis of the results was carried out using Student's t-test.
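A much-simplified, single-robot sketch of the Genetic Algorithm idea on a grid is given below; the fitness function, operator rates and the reading of the refinement operator as removal of cancelling move pairs are assumptions for illustration, and the multi-robot NetLogo model with moving obstacles is not reproduced.

```python
import random

MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
OPPOSITE = {"N": "S", "S": "N", "E": "W", "W": "E"}

def simulate(route, start, obstacles):
    """Follow the moves from start, refusing to enter any obstacle cell."""
    x, y = start
    for m in route:
        dx, dy = MOVES[m]
        if (x + dx, y + dy) not in obstacles:
            x, y = x + dx, y + dy
    return x, y

def fitness(route, start, goal, obstacles):
    """Lower is better: distance left to the goal plus a length penalty."""
    x, y = simulate(route, start, obstacles)
    return abs(x - goal[0]) + abs(y - goal[1]) + 0.05 * len(route)

def refine(route):
    """Refinement operator (one reading of 'eliminating useless moves'):
    drop adjacent pairs of opposite moves, which cancel each other out."""
    out = []
    for m in route:
        if out and out[-1] == OPPOSITE[m]:
            out.pop()
        else:
            out.append(m)
    return out

def ga_route(start, goal, obstacles, pop_size=60, length=40, generations=200):
    pop = [[random.choice(list(MOVES)) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: fitness(r, start, goal, obstacles))
        # Keep and refine the best half, then refill by crossover and mutation.
        pop = [refine(r) or [random.choice(list(MOVES))] for r in pop[:pop_size // 2]]
        while len(pop) < pop_size:
            a, b = random.sample(pop[:10], 2)              # selection
            cut = random.randint(1, max(1, min(len(a), len(b)) - 1))
            child = a[:cut] + b[cut:]                      # one-point crossover
            if random.random() < 0.3:                      # mutation
                child[random.randrange(len(child))] = random.choice(list(MOVES))
            pop.append(child)
    return min(pop, key=lambda r: fitness(r, start, goal, obstacles))

route = ga_route((0, 0), (6, 6), obstacles={(3, 3), (3, 4), (4, 3)})
```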