857 results for optimisation algorithms


Relevance:

20.00%

Abstract:

In this paper we design and develop several filtering strategies for the analysis of data generated by a resonant bar gravitational wave (GW) antenna, with the goal of assessing the presence (or absence) therein of long-duration monochromatic GW signals, as well as the amplitude and frequency of any such signals within the sensitivity band of the detector. Such signals are most likely generated by the fast rotation of slightly asymmetric spinning stars. We develop practical procedures, together with a study of their statistical properties, which provide useful information on the performance of each technique. The selection of candidate events is then established according to threshold-crossing probabilities, based on the Neyman-Pearson criterion. In particular, it is shown that our approach, based on phase estimation, presents a better signal-to-noise ratio than pure spectral analysis, the most common approach.
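As an illustration of Neyman-Pearson threshold selection for a monochromatic line in noise, here is a minimal sketch, not the paper's filters: it assumes approximately white Gaussian noise, so that each periodogram bin power is exponentially distributed, and the sampling rate, false-alarm level, and injected signal are all invented.

```python
import numpy as np

def detect_monochromatic(signal, fs, p_fa=1e-3):
    """Flag periodogram bins whose power crosses a Neyman-Pearson
    threshold set for a per-bin false-alarm probability p_fa,
    assuming approximately white Gaussian noise."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal))**2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Under Gaussian noise each bin power is ~exponential; the median
    # gives a robust estimate of the mean noise power (median = mu*ln 2).
    noise_power = np.median(spectrum) / np.log(2)
    threshold = -noise_power * np.log(p_fa)   # P(power > t) = exp(-t/mu)
    return freqs[spectrum > threshold], threshold

# Example: a weak 50.5 Hz line buried in unit-variance noise
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
x = 0.3 * np.sin(2 * np.pi * 50.5 * t) + np.random.randn(t.size)
print(detect_monochromatic(x, fs))
```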

Relevance:

20.00%

Abstract:

This master's thesis surveys the literature on how evolutionary algorithms are used to solve search and optimisation problems in software engineering. Evolutionary algorithms are methods that imitate the process of natural evolution: an artificial evolution process evaluates the fitness of each individual (a candidate solution), and the next population of candidates is formed from the good properties of the current one by applying mutation and crossover operations. Evolutionary algorithm applications related to software engineering were collected from the literature, classified, and presented, together with the necessary basics of evolutionary algorithms. It was concluded that the majority of such applications concern software design or testing; examples include classifying software production data, project scheduling, static task scheduling for parallel computing, allocating modules to subsystems, N-version programming, test data generation, and generating an integration test order. Many applications were experimental rather than ready for real production use. Some Computer Aided Software Engineering tools based on evolutionary algorithms also exist.
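The evolutionary loop described above fits in a few lines. This is a generic sketch of a genetic algorithm on bit strings; the fitness function, operators, and parameters are illustrative and not tied to any application from the thesis.

```python
import random

def evolve(fitness, genome_len=20, pop_size=50, generations=100,
           p_mut=0.02, seed=0):
    """Minimal generational GA over bit-string genomes: tournament
    selection, one-point crossover, per-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, genome_len)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (rng.random() < p_mut) for g in child]  # mutate
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy problem: maximise the number of 1-bits ("OneMax")
best = evolve(fitness=sum)
print(best, sum(best))
```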

Relevance:

20.00%

Abstract:

Several possible methods of increasing the efficiency and power of hydro power plants by improving the flow passages are investigated in this study. The theoretical background of diffuser design and its application to the optimisation of hydraulic turbine draft tubes is presented in the first part of this study. Several draft tube modernisation projects that have been carried out recently are discussed. Also, a method of increasing the efficiency of the draft tube by injecting a high-velocity jet into the boundary layer is presented. Methods of increasing the head of a hydro power plant by using an ejector or a jet pump are discussed in the second part of this work. The theoretical principles of various ejector and jet pump types are presented, and four different methods of calculating them are examined in more detail. A self-made computer code is used to calculate the gain in head for two example power plants. Suitable ejector installations for the example plants are also discussed. The efficiency of the ejector power was found to be in the range of 6-15% for conventional head increasers, and 30% for the jet pump at its optimum operating point. In practice, it is impossible to install an optimised jet pump with a 30% efficiency into the draft tube, as this would considerably reduce the efficiency of the draft tube at normal operating conditions. It demonstrates, however, the potential for improvement which lies in conventional head increaser technology. This study is based on previous publications and on published test results; no laboratory measurements were made for this study. Certain aspects of modelling the flow in the draft tube using computational fluid dynamics are discussed in the final part of this work. The draft tube inlet velocity field is a vital boundary condition for such a calculation. Several previously measured velocity fields that have successfully been utilised in such flow calculations are presented herein.
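For reference, the textbook definition of jet-pump (ejector) efficiency, the quantity the 6-15% and 30% figures above refer to, is the energy gained by the pumped secondary flow divided by the energy given up by the motive flow. Below is a minimal sketch with invented numbers; the thesis's self-made code is not reproduced here.

```python
def jet_pump_efficiency(q_primary, q_secondary, h_primary, h_secondary,
                        h_discharge):
    """Textbook jet-pump efficiency: head energy gained by the
    secondary (suction) flow divided by head energy given up by the
    primary (motive) flow. Heads in metres, flow rates in m^3/s."""
    gained = q_secondary * (h_discharge - h_secondary)
    spent = q_primary * (h_primary - h_discharge)
    return gained / spent

# Illustrative numbers only (not from the thesis): 2 m^3/s of motive
# flow at 60 m head lifting 6 m^3/s of secondary flow by 3 m.
print(jet_pump_efficiency(q_primary=2.0, q_secondary=6.0,
                          h_primary=60.0, h_secondary=0.0,
                          h_discharge=3.0))   # ~0.16, i.e. about 16%
```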

Relevance:

20.00%

Abstract:

BACKGROUND: HIV surveillance requires monitoring of new HIV diagnoses and differentiation of incident and older infections. In 2008, Switzerland implemented a system for monitoring incident HIV infections based on the results of a line immunoassay (Inno-Lia) mandatorily conducted for HIV confirmation and type differentiation (HIV-1, HIV-2) of all newly diagnosed patients. Based on this system, we assessed the proportion of incident HIV infection among newly diagnosed cases in Switzerland during 2008-2013. METHODS AND RESULTS: Inno-Lia antibody reaction patterns recorded in anonymous HIV notifications to the federal health authority were classified by 10 published algorithms into incident (up to 12 months) or older infections. Utilizing these data, annual incident infection estimates were obtained in two ways, (i) based on the diagnostic performance of the algorithms and utilizing the relationship 'incident = true incident + false incident', (ii) based on the window-periods of the algorithms and utilizing the relationship 'Prevalence = Incidence x Duration'. From 2008-2013, 3,851 HIV notifications were received. Adult HIV-1 infections amounted to 3,809 cases, and 3,636 of them (95.5%) contained Inno-Lia data. Incident infection totals calculated were similar for the performance- and window-based methods, amounting on average to 1,755 (95% confidence interval, 1,588-1,923) and 1,790 cases (95% CI, 1,679-1,900), respectively. More than half of these were among men who had sex with men. Both methods showed a continuous decline of annual incident infections 2008-2013, totaling -59.5% and -50.2%, respectively. The decline of incident infections continued even in 2012, when a 15% increase in HIV notifications had been observed. This increase was entirely due to older infections. Overall declines 2008-2013 were of similar extent among the major transmission groups. CONCLUSIONS: Inno-Lia based incident HIV-1 infection surveillance proved useful and reliable. It represents a free, additional public health benefit of the use of this relatively costly test for HIV confirmation and type differentiation.
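The two estimation routes named in the abstract can be sketched as follows. This is a simplified illustration with hypothetical counts, sensitivity, specificity, and window, not the published algorithms or the Swiss data; the performance-based correction shown is the standard Rogan-Gladen-style inversion.

```python
def true_incident_count(n_classified_incident, n_total, sensitivity,
                        specificity):
    """Performance-based route ('incident = true incident + false
    incident'): invert the misclassification using the algorithm's
    sensitivity and specificity (Rogan-Gladen-style)."""
    false_pos_rate = 1.0 - specificity
    return ((n_classified_incident - false_pos_rate * n_total)
            / (sensitivity - false_pos_rate))

def incidence_from_window(n_classified_incident, window_years):
    """Window-based route ('Prevalence = Incidence x Duration'):
    cases sitting in the 'incident' window divided by its duration."""
    return n_classified_incident / window_years

# Hypothetical inputs, not the Swiss data: 300 of 600 notifications
# classified incident by an algorithm with a 12-month window.
print(true_incident_count(300, 600, sensitivity=0.85, specificity=0.95))
print(incidence_from_window(300, window_years=1.0))
```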

Relevance:

20.00%

Abstract:

BACKGROUND: Lung clearance index (LCI), a marker of ventilation inhomogeneity, is elevated early in children with cystic fibrosis (CF). However, in infants with CF, LCI values are found to be normal, although structural lung abnormalities are often detectable. We hypothesized that this discrepancy is due to inadequate algorithms in the available software package. AIM: Our aim was to challenge the validity of these software algorithms. METHODS: We compared multiple breath washout (MBW) results of current software algorithms (automatic modus) to refined algorithms (manual modus) in 17 asymptomatic infants with CF, and 24 matched healthy term-born infants. The main difference between these two analysis methods lies in the calculation of the molar mass differences that the system uses to define the completion of the measurement. RESULTS: In infants with CF the refined manual modus revealed clearly elevated LCI above 9 in 8 out of 35 measurements (23%), all showing LCI values below 8.3 using the automatic modus (paired t-test comparing the means, P < 0.001). Healthy infants showed normal LCI values using both analysis methods (n = 47, paired t-test, P = 0.79). The most relevant reason for falsely normal LCI values in infants with CF using the automatic modus was premature recognition of end-of-test during the washout. CONCLUSION: We recommend the use of the manual modus for the analysis of MBW outcomes in infants in order to obtain more accurate results. This will allow appropriate use of infant lung function results for clinical and scientific purposes. Pediatr Pulmonol. 2015; 50:970-977. © 2015 Wiley Periodicals, Inc.
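For context, LCI is conventionally computed as cumulative expired volume over functional residual capacity (FRC) at the point where the end-tidal tracer concentration durably falls below 1/40 of its starting value. The sketch below shows how an end-of-test rule of this kind enters the calculation, which is exactly where the automatic modus erred; the precise criteria of the commercial and refined algorithms are not reproduced here.

```python
def lung_clearance_index(breath_volumes, et_concentrations, frc,
                         cutoff_fraction=1 / 40, confirm_breaths=3):
    """Compute LCI = cumulative expired volume / FRC at the breath
    where the end-tidal tracer concentration first stays below
    cutoff_fraction of the starting concentration for
    confirm_breaths consecutive breaths (a common end-of-test rule).
    Calling end-of-test too early biases LCI low, which is the
    abstract's point."""
    c0 = et_concentrations[0]
    below = 0
    cumulative_volume = 0.0
    for volume, conc in zip(breath_volumes, et_concentrations):
        cumulative_volume += volume
        below = below + 1 if conc < cutoff_fraction * c0 else 0
        if below >= confirm_breaths:
            return cumulative_volume / frc
    raise ValueError("washout incomplete: end-of-test not reached")
```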

Relevance:

20.00%

Abstract:

Evaluation of image quality (IQ) in Computed Tomography (CT) is important to ensure that diagnostic questions are correctly answered, whilst keeping radiation dose to the patient as low as reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These values, together with standard dose indicators, can be combined into 'figures of merit' (FOM) to characterise the dose efficiency of CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detector efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of methods that have been used to characterise image quality in CT: from objective measurements of physical parameters to clinically task-based approaches (i.e. the model observer (MO) approach), including the pure human observer approach. Combined with a dose indicator, a generalised dose efficiency index can then be explored in a framework of system and patient dose optimisation. We focus on the IQ methodologies required for standard reconstruction, but also for iterative reconstruction algorithms. Within this framework, previously used FOMs are presented together with a proposal to update them in line with technological progress. The MO approach, which objectively assesses IQ on clinically relevant tasks, best reflects radiologists' detection performance and is therefore of most relevance in the clinical environment.
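One common form of such a figure of merit is detectability squared per unit dose, so that halving the dose at equal detectability doubles the score. Here is a minimal sketch, assuming a simple two-class SNR as the detectability measure and a CTDIvol-like dose input; both are simplifications relative to the methods reviewed.

```python
import numpy as np

def dose_efficiency_fom(roi_signal, roi_background, dose_mgy):
    """Generic dose-efficiency figure of merit: detectability (a
    plain two-class SNR from ROI measurements with and without the
    signal) squared, divided by the dose indicator."""
    snr = ((np.mean(roi_signal) - np.mean(roi_background))
           / np.sqrt(0.5 * (np.var(roi_signal) + np.var(roi_background))))
    return snr**2 / dose_mgy

# Hypothetical ROI samples: same detectability at 10 mGy vs 5 mGy
# would double the FOM for the lower-dose protocol.
rng = np.random.default_rng(0)
print(dose_efficiency_fom(rng.normal(12, 3, 100),
                          rng.normal(10, 3, 100), dose_mgy=10.0))
```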

Relevance:

20.00%

Abstract:

Computed tomography (CT) is an imaging technique in which interest has grown steadily since it came into use in the early 1970s. Today it is an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. To ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular in follow-up studies which require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation due to their faster metabolism, and harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in assessing the quality of the images produced using those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was continuously performed in close collaboration with radiologists. The work began by tackling the way to characterise image quality in musculo-skeletal examinations.
We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their image acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern when reducing patient dose in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers; our experimental parameters determined the type of model to use. Ideal model observers were applied to characterise image quality when purely physical results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with the eye of a radiologist, taking advantage of their incorporation of elements of the human visual system. This work confirmed that the use of model observers makes it possible to assess image quality using a task-based approach, which, in turn, establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstruction, model-based ones offer the greatest potential, since images produced using this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has also clarified the role of medical physicists in CT imaging: the standard metrics remain important for assessing unit compliance with legal requirements, but the use of a model observer is the way to go when optimising imaging protocols.
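As an illustration of the model-observer idea, here is a sketch of the simplest linear observer, the non-prewhitening (NPW) observer, applied to two sets of images. The thesis used ideal and anthropomorphic models, which are more elaborate than this; the synthetic images below are invented.

```python
import numpy as np

def npw_observer_dprime(images_signal, images_absent):
    """Non-prewhitening (NPW) model observer, one of the simplest
    task-based IQ metrics: the template is the mean difference image,
    the test statistic is its dot product with each image, and d'
    separates the two score distributions. Inputs have shape
    (n_images, height, width)."""
    template = images_signal.mean(axis=0) - images_absent.mean(axis=0)
    scores_s = np.tensordot(images_signal, template, axes=([1, 2], [0, 1]))
    scores_a = np.tensordot(images_absent, template, axes=([1, 2], [0, 1]))
    return ((scores_s.mean() - scores_a.mean())
            / np.sqrt(0.5 * (scores_s.var() + scores_a.var())))

# Synthetic check: a faint square "lesion" in white noise
rng = np.random.default_rng(0)
disc = np.zeros((32, 32))
disc[12:20, 12:20] = 0.5
present = rng.normal(0, 1, (200, 32, 32)) + disc
absent = rng.normal(0, 1, (200, 32, 32))
print(npw_observer_dprime(present, absent))
```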

Relevance:

20.00%

Abstract:

In cancer patients, malignant cells are often recognised and destroyed by the patient's cytotoxic T cells. For several years, research has therefore aimed at producing vaccines that sensitise the cells of adaptive immunity in order to prevent certain cancers. Although vaccines targeting CD8+ (cytotoxic) T cells show high in-vitro efficacy, a vaccine able to target both CD8+ and CD4+ T cells would be more effective (1-3). Indeed, helper (CD4+) T cells promote the production and maintenance of long-lived memory CD8+ T cells. There is a large number of CD4+ T-cell subtypes, and their action against cancer cells differs: for example, Treg lymphocytes have a marked pro-tumoral activity (4), whereas Th1 lymphocytes have an anti-tumoral activity (5). However, the natural frequency of the different tumour-antigen-specific CD4+ T-cell subtypes is variable. Moreover, a certain plasticity of the different CD4+ T-cell subtypes has recently been demonstrated (6); this could be exploited by vaccination protocols in which tumour antigens are administered together with defined adjuvants. To this end, the role of antigen-specific CD4+ T cells in anti-tumour immunity must be better understood, and the proportion of CD4+ T-cell subtypes activated before and after vaccination must be known precisely. Analysis of T cells by flow cytometry is very often limited by the large number of cells required for protein-expression analysis; for tumour-antigen-specific CD4+ T cells this technique is often not applicable, because these cells are present in very small quantities in blood and in tumour tissues. For this reason, an approach based on the analysis of individual T cells was set up to study the gene-expression profiles of CD8+ and CD4+ T cells. (7,8) Methods: This new ("single cell") protocol was developed as a modification of the RT-PCR protocol, allowing specific detection of complementary DNA (cDNA) after global transcription of the messenger RNA (mRNA) expressed by an individual T cell. In this work, we optimise this new analysis technique for CD4+ T cells by selecting the best primers. First, clones with known functional profiles are generated by flow cytometry from the CD4+ T cells of a healthy donor. For this primer-optimisation step, the antigen specificity of the CD4+ T cells is not taken into consideration, so these clones can be studied and sorted by flow cytometry. Then, using the single-cell protocol, we test by PCR the primers for the factors specific to each CD4+ T-cell subtype on aliquots derived from single cells of the generated clones. We select the primers for which the sensitivity, specificity, and positive and negative predictive values of the tests are best. (9) Conclusion: In this work we generated cDNA from individual T cells and selected twelve primer pairs for the identification of CD4+ T-cell subtypes by single-cell PCR analysis:
factors specific to Th2 cells: IL-4, IL-5, IL-13, CRTh2, GATA3; factors specific to Th1 cells: TNFα, IL-2; factors specific to Treg cells: FOXP3, IL-2RA; factors specific to Th17 cells: RORC, CCR6; and one factor specific to naive cells: CCR7. These primers can be used in the future in combination with antigen-specific cells sorted by pMHCII multimer staining. This method will make it possible to understand the role, magnitude, and functional diversity of the antigen-specific CD4+ T-cell response in cancers and other diseases, and thus to refine research in oncological immunotherapy. (8)
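The four test characteristics used above to select primers can be computed directly from single-cell PCR calls. A minimal sketch with hypothetical counts:

```python
def primer_test_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV for a primer's ability
    to identify its target CD4+ T-cell subtype, from counts of
    true/false positive/negative single-cell PCR calls."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for one candidate primer pair
print(primer_test_metrics(tp=18, fp=2, tn=27, fn=3))
```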

Relevance:

20.00%

Abstract:

Network virtualisation is considerably gaining attention as a solution to ossification of the Internet. However, the success of network virtualisation will depend in part on how efficiently the virtual networks utilise substrate network resources. In this paper, we propose a machine learning-based approach to virtual network resource management. We propose to model the substrate network as a decentralised system and introduce a learning algorithm in each substrate node and substrate link, providing self-organization capabilities. We propose a multiagent learning algorithm that carries out the substrate network resource management in a coordinated and decentralised way. The task of these agents is to use evaluative feedback to learn an optimal policy so as to dynamically allocate network resources to virtual nodes and links. The agents ensure that while the virtual networks have the resources they need at any given time, only the required resources are reserved for this purpose. Simulations show that our dynamic approach significantly improves the virtual network acceptance ratio and the maximum number of accepted virtual network requests at any time while ensuring that virtual network quality of service requirements such as packet drop rate and virtual link delay are not affected.
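A minimal sketch of the kind of agent described above: tabular Q-learning in one substrate node, with discretised observed load as state and the reservable capacity fraction as action. The states, actions, and toy reward here are invented for illustration; the paper's coordinated multiagent scheme is more elaborate.

```python
import random

class NodeAgent:
    """Tabular Q-learning agent of the kind placed in each substrate
    node/link: state = discretised observed load, action = fraction
    of capacity to reserve for hosted virtual nodes. The reward
    should penalise both over-reservation and QoS violations."""
    ACTIONS = [0.25, 0.5, 0.75, 1.0]          # reservable fractions

    def __init__(self, n_load_levels=10, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = [[0.0] * len(self.ACTIONS) for _ in range(n_load_levels)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, load_level):
        if random.random() < self.eps:                 # explore
            return random.randrange(len(self.ACTIONS))
        row = self.q[load_level]                       # exploit
        return row.index(max(row))

    def learn(self, s, a, reward, s_next):
        target = reward + self.gamma * max(self.q[s_next])
        self.q[s][a] += self.alpha * (target - self.q[s][a])

# Toy loop: reward is highest when the reservation just covers demand
agent, load = NodeAgent(), 3
for _ in range(2000):
    a = agent.act(load)
    demand = load / 10
    reserved = NodeAgent.ACTIONS[a]
    reward = -abs(reserved - demand)       # waste or shortfall both cost
    next_load = random.randint(0, 9)
    agent.learn(load, a, reward, next_load)
    load = next_load
```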

Relevance:

20.00%

Abstract:

Data traffic caused by mobile advertising client software when it is communicating with the network server can be a pain point for many application developers who are considering advertising-funded application distribution, since the cost of the data transfer might scare their users away from using the applications. For the thesis project, a simulation environment was built to mimic the real client-server solution for measuring the data transfer over varying types of connections with different usage scenarios. For optimising data transfer, a few general-purpose compressors and XML-specific compressors were tried for compressing the XML data, and a few protocol optimisations were implemented. For optimising the cost, cache usage was improved and pre-loading was enhanced to use free connections to load the data. The data traffic structure and the various optimisations were analysed, and it was found that the cache usage and pre-loading should be enhanced and that the protocol should be changed, with report aggregation and compression using WBXML or gzip.
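The payoff of report aggregation plus compression is easy to demonstrate: many small ad reports share structure, so concatenating them before compressing exploits the redundancy. A sketch with an invented XML payload (WBXML has no standard-library support and is omitted here):

```python
import gzip
import zlib

# Hypothetical ad-report payload: many small, structurally similar records
report = ("<report><ad id='42' impressions='130' clicks='7'/>"
          "<ad id='57' impressions='95' clicks='2'/></report>" * 20)
raw = report.encode("utf-8")

# Aggregating before compressing is what makes the ratio worthwhile:
# redundancy across reports collapses well under DEFLATE.
print("raw bytes: ", len(raw))
print("gzip bytes:", len(gzip.compress(raw)))
print("zlib bytes:", len(zlib.compress(raw, 9)))
```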

Relevance:

20.00%

Abstract:

In the literature on housing market areas, different approaches to defining them can be found, for example using travel-to-work areas and, more recently, migration data. Here we propose a simple exercise to shed light on which approach performs better. Using regional data from Catalonia, Spain, we have computed housing market areas with both commuting data and migration data. To decide which procedure shows superior performance, we have looked at the uniformity of prices within areas. The main finding is that commuting-based algorithms produce areas that are more homogeneous in terms of housing prices.
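The uniformity criterion can be operationalised, for instance, as the mean within-area coefficient of variation of prices: the delineation with the lower value draws more internally homogeneous areas. A sketch with invented prices, not the Catalan data:

```python
import statistics

def within_area_dispersion(prices_by_area):
    """Mean coefficient of variation of housing prices across areas:
    the lower the value, the more internally homogeneous the
    delineation."""
    cvs = [statistics.stdev(p) / statistics.mean(p)
           for p in prices_by_area.values() if len(p) > 1]
    return sum(cvs) / len(cvs)

# Hypothetical delineations over the same municipalities
commuting = {"A": [1800, 1900, 1750], "B": [1200, 1250, 1300]}
migration = {"A": [1800, 1250, 1750], "B": [1200, 1900, 1300]}
print(within_area_dispersion(commuting) < within_area_dispersion(migration))
```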

Relevance:

20.00%

Abstract:

Fine powders of minerals are commonly used in the paper and paint industries, and for ceramics. Research into utilising different waste materials in these applications is environmentally important. In this work, the ultrafine grinding of two waste gypsum materials, namely FGD (Flue Gas Desulphurisation) gypsum and phosphogypsum from a phosphoric acid plant, with an attrition bead mill and with a jet mill has been studied. The objective of this research was to test the suitability of the attrition bead mill and of the jet mill for producing gypsum powders with a particle size of a few microns. The grinding conditions were optimised by studying the influences of different operational grinding parameters on the grinding rate and on the energy consumption of the process, in order to achieve a product fineness such as that required in the paper industry with as low an energy consumption as possible. Based on the experimental results, the most influential parameters in attrition grinding were found to be the bead size, the stirrer type, and the stirring speed. The best conditions for the attrition grinding process, in terms of product fineness and specific energy consumption, are to grind the material with small grinding beads and a high rotational speed of the stirrer; with a suitable grinding additive, a finer product is achieved at a lower energy consumption. In jet mill grinding the most influential parameters were the feed rate, the volumetric flow rate of the grinding air, and the height of the internal classification tube. The optimised condition for the jet mill is to grind with a small feed rate and a high volumetric flow rate of grinding air, with the internal classification tube set low. A finer product at a higher production rate was achieved with the attrition bead mill than with the jet mill; attrition grinding is thus better suited than jet grinding to the ultrafine grinding of gypsum. Finally, the suitability of the population balance model for simulating grinding processes was studied with different S, B, and C functions. A new S function for modelling an attrition mill and a new C function for modelling a jet mill were developed. The suitability of the selected models with the developed grinding functions was tested by curve-fitting the particle size distributions of the grinding products and comparing the fitted size distributions to the measured particle sizes. According to the simulation results, the models are suitable for the estimation and simulation of the studied grinding processes.
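For reference, the batch population balance model the last paragraph refers to evolves the mass fraction m_i in each size class as dm_i/dt = -S_i m_i + sum over j<i of b_ij S_j m_j, where S is the selection (breakage-rate) function and B the breakage-distribution function. A minimal sketch with invented S and B values; the thesis's new S and C functions are not reproduced here:

```python
import numpy as np

def batch_grinding(m0, S, B, t_end, dt=0.01):
    """Batch population-balance model: dm_i/dt = -S[i]*m[i]
    + sum_{j<i} B[i,j]*S[j]*m[j], where B[i,j] is the mass fraction
    broken out of size class j that reports to class i.
    Simple explicit Euler integration."""
    m = np.array(m0, dtype=float)
    for _ in range(int(t_end / dt)):
        births = np.tril(B, -1) @ (S * m)   # only coarser classes feed i
        m += dt * (births - S * m)
    return m

# Toy 4-class example (coarse -> fine); hypothetical S and B values
S = np.array([0.8, 0.5, 0.2, 0.0])           # finest class does not break
B = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.3, 0.6, 0.0, 0.0],
              [0.2, 0.4, 1.0, 0.0]])          # columns sum to 1 (mass)
print(batch_grinding([1.0, 0.0, 0.0, 0.0], S, B, t_end=5.0))
```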

Relevance:

20.00%

Abstract:

The preparation of 2', 3'-di-O-hexanoyluridine (2) by a Candida antarctica B lipase-catalysed alcoholysis of 2', 3', 5'-tri-O-hexanoyluridine (1) was optimised using an experimental design. At 25 °C, improved experimental conditions increased the yield of 2 from 80% to 96%. In addition to the yield improvement, the reaction volume could be reduced by a factor of 5 and the reaction time significantly shortened.
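A two-level full factorial design of the kind referred to enumerates every combination of factor levels, runs each, and keeps the best response. A sketch with invented factors and a placeholder response surface; the actual factors and yields are those of the study, not these:

```python
from itertools import product

# Hypothetical two-level factorial screen: factor names and levels
# are illustrative, not taken from the paper.
factors = {
    "temp_C": [25, 40],
    "enzyme_pct": [5, 10],
    "alcohol_ratio": [3, 6],
}

def run_experiment(temp_C, enzyme_pct, alcohol_ratio):
    # Placeholder response surface, purely illustrative -- in practice
    # this is the measured yield of compound 2 for those settings.
    return 96 - 0.5 * abs(temp_C - 25) + 0.2 * enzyme_pct - alcohol_ratio

runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
best = max(runs, key=lambda r: run_experiment(**r))
print(len(runs), "runs; best settings:", best)
```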

Relevance:

20.00%

Abstract:

Identification of the order of an autoregressive moving average (ARMA) model by the usual graphical method is subjective. Hence, there is a need to develop a technique that identifies the order without graphical investigation of the series autocorrelations. To avoid subjectivity, this thesis focuses on determining the order of an ARMA model using Reversible Jump Markov Chain Monte Carlo (RJMCMC). RJMCMC selects the model from a set of candidate models according to goodness of fit, the standard deviation of the errors, and the frequency with which each model is accepted. Alongside a detailed treatment of the classical Box-Jenkins modelling methodology, the integration of MCMC algorithms is examined through parameter estimation and model fitting of ARMA models. This makes it possible to verify how well the MCMC algorithms can treat ARMA models, by comparing their results with the graphical method. The MCMC approach was found to produce better results than the classical time series approach.
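As a simpler, non-Bayesian counterpart to the approach described, both the graphical method and RJMCMC can be replaced by an information-criterion grid search. This is not the thesis's method, just a compact baseline sketch using statsmodels:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def select_arma_order(y, max_p=3, max_q=3):
    """Fit every ARMA(p, q) up to the given bounds and keep the
    order minimising BIC -- an objective, if brute-force,
    alternative to eyeballing ACF/PACF plots."""
    best = None
    for p in range(max_p + 1):
        for q in range(max_q + 1):
            if p == q == 0:
                continue
            try:
                fit = ARIMA(y, order=(p, 0, q)).fit()
            except Exception:
                continue   # some orders fail to converge
            if best is None or fit.bic < best[0]:
                best = (fit.bic, (p, q))
    return best[1]

# A simulated ARMA(2,1) series should usually be recovered
rng = np.random.default_rng(1)
e = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + e[t] + 0.4 * e[t - 1]
print(select_arma_order(y))
```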