19 results for PV maximum power point (MPP) tracker (MPPT) algorithms
at Université de Lausanne, Switzerland
Abstract:
For the last two decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the widely used supermatrix approach). In this study, seven of the most commonly used supertree methods are investigated using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees with the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree, and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test whether performance was affected by the heuristic searches rather than by the algorithms themselves. Based on our results, two main groups of supertree methods were identified: on the one hand, the matrix representation with parsimony (MRP), MinFlip, and MinCut methods performed well according to our criteria; on the other, the average consensus, split fit, and most similar supertree methods showed poorer performance or at least did not behave the same way as the total evidence tree. Results for the super distance matrix, the most recent approach tested here, were promising, with at least one derived method performing as well as MRP, MinFlip, and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting correct behavior of the heuristic searches and a relatively low sensitivity of the algorithms to data set size and missing data.
Results also showed that MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with the Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and in increased computing power to handle large data sets. The latter would prove particularly useful for promising approaches such as the maximum quartet fit method, which still requires substantial computing power.
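The MRP family of methods rests on Baum–Ragan coding: every clade of every input tree becomes one binary character, with taxa absent from that tree coded as missing. A minimal sketch of this coding step (the trees and taxon names below are hypothetical toys, not the Sapindaceae data):

```python
# Baum-Ragan matrix representation: each clade of each input tree becomes a
# binary column; taxa absent from that source tree are coded '?' (missing).

def mrp_matrix(trees, all_taxa):
    """trees: list of (taxa_in_tree, clades), clades being frozensets of taxa."""
    columns = []
    for taxa, clades in trees:
        for clade in clades:
            col = {}
            for t in all_taxa:
                if t not in taxa:
                    col[t] = '?'      # taxon not sampled in this source tree
                elif t in clade:
                    col[t] = '1'      # member of the clade
                else:
                    col[t] = '0'
            columns.append(col)
    return {t: ''.join(c[t] for c in columns) for t in all_taxa}

# Two toy input trees, ((A,B),C) and ((B,C),D), each with one informative clade
trees = [
    ({'A', 'B', 'C'}, [frozenset({'A', 'B'})]),
    ({'B', 'C', 'D'}, [frozenset({'B', 'C'})]),
]
matrix = mrp_matrix(trees, ['A', 'B', 'C', 'D'])
print(matrix)  # {'A': '1?', 'B': '11', 'C': '01', 'D': '?0'}
```

The resulting matrix is then handed to a standard parsimony search, which is where the heuristic search strategies compared in the study come into play.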
Abstract:
The noise power spectrum (NPS) is the reference metric for understanding the noise content in computed tomography (CT) images. To evaluate the noise properties of clinical multidetector CT (MDCT) scanners, local 2D and 3D NPSs were computed for different acquisition and reconstruction parameters. A 64- and a 128-slice MDCT scanner were employed. Measurements were performed on a water phantom in axial and helical acquisition modes. The CT dose index was identical for both installations. The influence of parameters such as the pitch, the reconstruction filter (soft, standard and bone) and the reconstruction algorithm (filtered back-projection (FBP), adaptive statistical iterative reconstruction (ASIR)) was investigated. Images were also reconstructed in the coronal plane using a reformat process, and the 2D and 3D NPSs were then computed. In axial acquisition mode, the 2D axial NPS showed an important magnitude variation as a function of the z-direction when measured at the phantom center. In helical mode, a directional dependency with a lobular shape was observed while the magnitude of the NPS remained constant. Important effects of the reconstruction filter, pitch and reconstruction algorithm were observed in the 3D NPS results for both MDCTs. With ASIR, a reduction of the NPS magnitude and a shift of the NPS peak toward the low-frequency range were visible. The 2D coronal NPS obtained from the reformatted images was affected by the interpolation when compared to the 2D coronal NPS obtained from 3D measurements. The noise properties of volumes measured on last-generation MDCTs were thus studied using the local 3D NPS metric; however, the impact of noise non-stationarity may need further investigation.
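The local 2D NPS described above is, in essence, the ensemble average of the squared Fourier transform of mean-subtracted noise ROIs, scaled by the pixel area. A minimal sketch, with synthetic white noise standing in for water-phantom ROIs (the normalisation is chosen so that integrating the NPS over frequency recovers the pixel variance):

```python
import numpy as np

def nps_2d(noise_rois, pixel_size):
    """Local 2D noise power spectrum from an ensemble of noise ROIs:
    ensemble-averaged |FFT|^2 of the mean-subtracted ROIs, scaled so that
    summing NPS * df_u * df_v returns the pixel variance."""
    n, ny, nx = noise_rois.shape
    acc = np.zeros((ny, nx))
    for roi in noise_rois:
        acc += np.abs(np.fft.fft2(roi - roi.mean())) ** 2   # detrend, transform
    return (pixel_size ** 2) / (nx * ny) * acc / n

# Synthetic stand-in: 64 ROIs of white noise with sigma = 10 HU, 0.5 mm pixels
rng = np.random.default_rng(0)
rois = rng.normal(0.0, 10.0, size=(64, 32, 32))
nps = nps_2d(rois, pixel_size=0.5)
# Sanity check: nps.sum() * df_u * df_v should come back close to sigma^2 = 100
```

Integrating a measured NPS back to the pixel variance is a useful sanity check before moving on to the 3D and reformatted-coronal cases.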
Abstract:
A wide range of modelling algorithms is used by ecologists, conservation practitioners, and others to predict species ranges from point locality data. Unfortunately, the amount of data available is limited for many taxa and regions, making it essential to quantify the sensitivity of these algorithms to sample size. This is the first study to address this need by rigorously evaluating a broad suite of algorithms with independent presence-absence data from multiple species and regions. We evaluated predictions from 12 algorithms for 46 species (from six different regions of the world) at three sample sizes (100, 30, and 10 records). We used data from natural history collections to run the models, and evaluated the quality of model predictions with the area under the receiver operating characteristic curve (AUC). With decreasing sample size, model accuracy decreased and variability increased across species and between models. Novel modelling methods that incorporate both interactions between predictor variables and complex response shapes (i.e. GBM, MARS-INT, BRUTO) performed better than most methods at large sample sizes but not at the smallest sample sizes. Other algorithms were much less sensitive to sample size, including an algorithm based on maximum entropy (MAXENT) that had among the best predictive power across all sample sizes. Relative to other algorithms, a distance metric algorithm (DOMAIN) and a genetic algorithm (OM-GARP) had intermediate performance at the largest sample size and among the best performance at the lowest sample size. No algorithm predicted consistently well with small sample sizes (n < 30), which should encourage highly conservative use of predictions based on small samples, restricting them to exploratory modelling.
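The AUC used for evaluation is equivalent to the probability that a randomly chosen presence record receives a higher predicted score than a randomly chosen absence record (ties counting one half). A minimal sketch in its pairwise Mann-Whitney form, with hypothetical scores:

```python
def auc(scores_presence, scores_absence):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random presence site outscores a random absence
    site, counting ties as one half. O(n*m), fine for small samples."""
    wins = 0.0
    for p in scores_presence:
        for a in scores_absence:
            if p > a:
                wins += 1.0
            elif p == a:
                wins += 0.5
    return wins / (len(scores_presence) * len(scores_absence))

print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # 8/9 ≈ 0.889
```

An AUC of 0.5 corresponds to random prediction and 1.0 to perfect discrimination, which is why it is a convenient common scale across 12 algorithms and 46 species.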
Abstract:
Context: Understanding the process through which adolescents and young adults first try legal and illegal substances is a crucial point for the development of tailored prevention and treatment programs. However, patterns of substance first use can be very complex when multiple substances are considered, requiring reduction into a small number of meaningful categories. Data: We used data from a survey on adolescent and young adult health conducted in 2002 in Switzerland. Answers from 2212 subjects aged 19 and 20 were included. The first consumption ever of 10 substances (tobacco, cannabis, medicine to get high, sniffing (volatile substances and inhalants), ecstasy, GHB, LSD, cocaine, methadone, and heroin) was considered, for a grand total of 516 different patterns. Methods: In a first step, automatic clustering was used to decrease the number of patterns to 50. Then, two groups of substance use experts (three social field workers, and three toxicologists and health professionals) were asked to reduce them into a maximum of 10 meaningful categories. Results: Classifications obtained through our methodology are of practical interest, revealing associations invisible to purely automatic algorithms. The article includes a detailed analysis of both final classifications, and a discussion of the advantages and limitations of our approach.
Advanced mapping of environmental data: Geostatistics, Machine Learning and Bayesian Maximum Entropy
Abstract:
This book combines geostatistics and global mapping systems to present an up-to-the-minute study of environmental data. Featuring numerous case studies, the reference covers model dependent (geostatistics) and data driven (machine learning algorithms) analysis techniques such as risk mapping, conditional stochastic simulations, descriptions of spatial uncertainty and variability, artificial neural networks (ANN) for spatial data, Bayesian maximum entropy (BME), and more.
Abstract:
The μ-calculus is an extension of modal logic with fixed-point operators. In this work we study the complexity of certain fragments of this logic from two different but closely related points of view: one syntactic (or combinatorial), the other topological. From the syntactic point of view, the properties definable in this formalism are classified by the combinatorial complexity of the formulas of the logic, that is, by the number of alternations of fixed-point operators; comparing two sets of models then amounts to comparing the syntactic complexity of the associated formulas. From the topological point of view, the definable properties are compared by means of continuous reductions, or according to their positions in the Borel or projective hierarchies. In the first part of this work we adopt the syntactic point of view in order to study the behaviour of the μ-calculus on restricted classes of models. In particular we show that: (1) on the class of symmetric and transitive models, the μ-calculus is as expressive as modal logic; (2) on the class of transitive models, every property definable by a μ-calculus formula is definable by a formula without fixed-point alternation; (3) on the class of reflexive models, there is, for every n, a property that can only be defined by a μ-calculus formula with at least n fixed-point alternations; (4) on the class of well-founded and transitive models, the μ-calculus is as expressive as modal logic. The fact that the μ-calculus is as expressive as modal logic on the class of well-founded and transitive models is well known; it is a consequence of a fixed-point theorem proved independently by De Jongh and Sambin in the mid-1970s.
The proof we give of this collapse of the expressivity of the μ-calculus on that class of models is nevertheless independent of their result. We then extend the language of the μ-calculus by allowing the fixed-point operators to bind negative occurrences of free variables. By showing that this formalism is as expressive as the modal fragment, we obtain a new proof of the uniqueness-of-fixed-points theorem of Bernardi, De Jongh, and Sambin, and a constructive proof of the existence theorem of De Jongh and Sambin. Concerning transitive models, from the topological point of view this time, we prove that modal logic corresponds to the Borel fragment of the μ-calculus on this class of transition systems. In other words, we verify that every definable property of transitive models that is, topologically, a Borel property is necessarily a modal property, and conversely. This characterisation of the modal fragment follows from our showing that, modulo EF-bisimulation, a set of trees is definable in the temporal logic EF if and only if it is Borel. Since these two properties can be shown to coincide with the effective characterisation of definability in the logic EF for finitely branching trees given by Bojanczyk and Idziaszek [24], we obtain their decidability as a corollary. In a second part, we study the topological complexity of a sub-fragment of the alternation-free fragment of the μ-calculus. We show that a set of trees is definable by a formula of this fragment with at least n alternations if and only if the property lies at least at the n-th level of the Borel hierarchy; in other words, we verify that for this fragment of the μ-calculus, the topological and combinatorial points of view coincide.
Moreover, we describe an effective procedure that computes, for any property definable in this language, its position in the Borel hierarchy, and hence the number of fixed-point alternations needed to define it. We then turn to the classification of sets of trees by continuous reduction, and give an effective description of the Wadge order of the class of sets of trees definable in the formalism under consideration. In particular, the hierarchy we obtain has height (ω^ω)^ω. We complete these results by describing an algorithm that computes the position in this hierarchy of any definable property.
Abstract:
PURPOSE: Studies of diffuse large B-cell lymphoma (DLBCL) are typically evaluated by using a time-to-event approach with relapse, re-treatment, and death commonly used as the events. We evaluated the timing and type of events in newly diagnosed DLBCL and compared patient outcome with reference population data. PATIENTS AND METHODS: Patients with newly diagnosed DLBCL treated with immunochemotherapy were prospectively enrolled onto the University of Iowa/Mayo Clinic Specialized Program of Research Excellence Molecular Epidemiology Resource (MER) and the North Central Cancer Treatment Group NCCTG-N0489 clinical trial from 2002 to 2009. Patient outcomes were evaluated at diagnosis and in the subsets of patients achieving event-free status at 12 months (EFS12) and 24 months (EFS24) from diagnosis. Overall survival was compared with age- and sex-matched population data. Results were replicated in an external validation cohort from the Groupe d'Etude des Lymphomes de l'Adulte (GELA) Lymphome Non Hodgkinien 2003 (LNH2003) program and a registry based in Lyon, France. RESULTS: In all, 767 patients with newly diagnosed DLBCL who had a median age of 63 years were enrolled onto the MER and NCCTG studies. At a median follow-up of 60 months (range, 8 to 116 months), 299 patients had an event and 210 patients had died. Patients achieving EFS24 had an overall survival equivalent to that of the age- and sex-matched general population (standardized mortality ratio [SMR], 1.18; P = .25). This result was confirmed in 820 patients from the GELA study and registry in Lyon (SMR, 1.09; P = .71). Simulation studies showed that EFS24 has comparable power to continuous EFS when evaluating clinical trials in DLBCL. CONCLUSION: Patients with DLBCL who achieve EFS24 have a subsequent overall survival equivalent to that of the age- and sex-matched general population. EFS24 will be useful in patient counseling and should be considered as an end point for future studies of newly diagnosed DLBCL.
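The standardized mortality ratio (SMR) used above is simply the observed number of deaths divided by the deaths expected if each age/sex stratum of the cohort died at the matched reference-population rate. A sketch with hypothetical strata and rates (illustrative only, not the MER/GELA data):

```python
def smr(observed_deaths, person_years, reference_rates):
    """Standardized mortality ratio: observed deaths over the deaths
    expected if each age/sex stratum experienced the reference-population
    mortality rate. Both dict arguments are keyed by stratum."""
    expected = sum(person_years[s] * reference_rates[s] for s in person_years)
    return observed_deaths / expected

# Hypothetical cohort: person-years at risk and annual death rates per stratum
py = {'m60-69': 400.0, 'f60-69': 350.0, 'm70-79': 250.0}
rates = {'m60-69': 0.020, 'f60-69': 0.013, 'm70-79': 0.045}
print(round(smr(30, py, rates), 2))  # expected = 23.8 deaths -> SMR ≈ 1.26
```

An SMR near 1 with a non-significant test, as reported for the EFS24 achievers, means the cohort's mortality is indistinguishable from the matched general population.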
Abstract:
In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) presents a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. Further complication may stem from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that take a single, fixed number of contributors as their output can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic, using Bayes' theorem, and provides a probability distribution for a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N, that is, the number of contributors, and the actual value taken by N.
Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that relies on categorical assumptions about N.
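The probabilistic strategy can be illustrated with a toy single-locus model. Assuming independent draws from known population allele frequencies and no drop-out or drop-in (both simplifications relative to casework), the likelihood of the observed allele set under each candidate N is estimated by simulation and combined with a prior via Bayes' theorem; the hypothetical frequencies below are not from any real database:

```python
import random

def likelihood_allele_set(observed, freqs, n_contributors, n_sims=20000, seed=1):
    """Monte Carlo estimate of P(observed allele set | N contributors) at a
    single locus: each contributor donates two alleles drawn from the
    population frequencies; no drop-out or drop-in is modelled."""
    alleles, probs = zip(*freqs.items())
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        drawn = set(rng.choices(alleles, weights=probs, k=2 * n_contributors))
        if drawn == observed:
            hits += 1
    return hits / n_sims

def posterior_n(observed, freqs, prior):
    """Posterior distribution over the number of contributors N."""
    joint = {n: p * likelihood_allele_set(observed, freqs, n)
             for n, p in prior.items()}
    z = sum(joint.values())
    return {n: v / z for n, v in joint.items()}

freqs = {'12': 0.30, '13': 0.25, '14': 0.25, '15': 0.20}   # hypothetical locus
observed = {'12', '13', '14', '15'}                        # four distinct alleles
post = posterior_n(observed, freqs, prior={1: 1/3, 2: 1/3, 3: 1/3})
print(post)  # N=1 is impossible (one person carries at most two alleles)
```

Note how the posterior automatically assigns zero mass to incompatible values of N, which is exactly the inferential impasse that a fixed-N procedure cannot escape.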
Abstract:
MRI tractography is the mapping of neural fiber pathways based on diffusion MRI measurements of tissue diffusion anisotropy. Tractography based on diffusion tensor imaging (DTI) cannot directly image multiple fiber orientations within a single voxel. To address this limitation, diffusion spectrum imaging (DSI) and related methods were developed to image complex distributions of intravoxel fiber orientation. Here we demonstrate that tractography based on DSI has the capacity to image crossing fibers in neural tissue. DSI was performed in formalin-fixed brains of adult macaques and in the brains of healthy human subjects. Fiber tract solutions were constructed by a streamline procedure, following directions of maximum diffusion at every point, and analyzed in an interactive visualization environment (TrackVis). We report that DSI tractography accurately shows the known anatomic fiber crossings in the optic chiasm, centrum semiovale, and brainstem; fiber intersections in gray matter, including the cerebellar folia and the caudate nucleus; and radial fiber architecture in the cerebral cortex. In contrast, none of these examples of fiber crossing and complex structure was identified by DTI analysis of the same data sets. These findings indicate that DSI tractography is able to image crossing fibers in neural tissue, an essential step toward non-invasive imaging of connectional neuroanatomy.
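The streamline procedure amounts to repeatedly stepping along the locally dominant diffusion direction. A minimal sketch on a synthetic direction field, with nearest-voxel lookup, a fixed step, and the volume boundary as the only stopping criterion (real DSI tractography resolves several directions per voxel and uses anisotropy/angle thresholds):

```python
import numpy as np

def streamline(direction_field, seed, step=0.5, n_steps=200):
    """Follow the dominant diffusion direction from a seed point.
    direction_field: (X, Y, Z, 3) array of unit vectors; nearest-voxel
    lookup; the sign is flipped when needed because fiber orientations
    are direction-free (d and -d are the same orientation)."""
    point = np.asarray(seed, dtype=float)
    prev = None
    track = [point.copy()]
    for _ in range(n_steps):
        idx = tuple(np.round(point).astype(int))
        if not all(0 <= i < s for i, s in zip(idx, direction_field.shape[:-1])):
            break                                  # left the volume
        d = direction_field[idx]
        if prev is not None and np.dot(d, prev) < 0:
            d = -d                                 # keep the track from doubling back
        point = point + step * d
        prev = d
        track.append(point.copy())
    return np.array(track)

# Toy field: dominant direction everywhere along x
field = np.zeros((10, 10, 10, 3))
field[..., 0] = 1.0
tr = streamline(field, seed=(1.0, 5.0, 5.0))
print(tr[0], tr[-1])  # straight track along x, stopping at the boundary
```

In a tensor-based pipeline the direction field would be the principal eigenvector of the fitted tensor at each voxel; DSI replaces that single vector with the maxima of the full orientation distribution, which is what lets tracks be continued through crossings.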
Abstract:
HIV virulence, i.e. the time of progression to AIDS, varies greatly among patients. As for other rapidly evolving pathogens of humans, it is difficult to know if this variance is controlled by the genotype of the host or that of the virus because the transmission chain is usually unknown. We apply the phylogenetic comparative approach (PCA) to estimate the heritability of a trait from one infection to the next, which indicates the control of the virus genotype over this trait. The idea is to use viral RNA sequences obtained from patients infected by HIV-1 subtype B to build a phylogeny, which approximately reflects the transmission chain. Heritability is measured statistically as the propensity for patients close in the phylogeny to exhibit similar infection trait values. The approach reveals that up to half of the variance in set-point viral load, a trait associated with virulence, can be heritable. Our estimate is significant and robust to noise in the phylogeny. We also check for the consistency of our approach by showing that a trait related to drug resistance is almost entirely heritable. Finally, we show the importance of taking into account the transmission chain when estimating correlations between infection traits. The fact that HIV virulence is, at least partially, heritable from one infection to the next has clinical and epidemiological implications. The difference between earlier studies and ours comes from the quality of our dataset and from the power of the PCA, which can be applied to large datasets and accounts for within-host evolution. The PCA opens new perspectives for approaches linking clinical data and evolutionary biology because it can be extended to study other traits or other infectious diseases.
Abstract:
Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that if the data follow the model, then asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as the influence function of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performance of the WMLs based on each of them is studied. It turns out that in a great variety of situations the WML substantially improves on the initial estimator, both in terms of finite-sample mean square error and in terms of bias under contamination. Moreover, the performance of the WML remains rather stable under a change of MDE, even though the MDEs themselves behave very differently. Two examples of application of the WML to real data are considered. In both of them, the necessity for a robust estimator is clear: the maximum likelihood estimator is badly corrupted by the presence of a few outliers. The procedure is particularly natural in the discrete-distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
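The two-phase scheme can be illustrated on the simplest discrete model, a Poisson mean, with the sample median standing in for the minimum disparity estimator and a hard residual cutoff standing in for the adaptive weighting (both are simplifications of the paper's procedure):

```python
import math

def weighted_mle_poisson(data, cutoff=3.0):
    """Two-phase robust fit of a Poisson mean: (1) a crude but robust
    initial estimate (the sample median); (2) zero-weight observations
    whose Pearson-type residual under the initial fit exceeds the cutoff,
    then take the weighted ML estimate, which for the Poisson mean is
    just the weighted sample mean."""
    lam0 = sorted(data)[len(data) // 2]            # robust initial estimate
    sd = math.sqrt(lam0) if lam0 > 0 else 1.0
    weights = [1.0 if abs(x - lam0) / sd <= cutoff else 0.0 for x in data]
    return sum(w * x for w, x in zip(weights, data)) / sum(weights)

clean = [3, 4, 5, 4, 3, 5, 4, 6, 2, 4]            # roughly Poisson(4)
contaminated = clean + [40, 45]                   # two gross outliers
mle = sum(contaminated) / len(contaminated)       # plain MLE: badly corrupted
wml = weighted_mle_poisson(contaminated)
print(round(mle, 2), round(wml, 2))               # 10.42 4.0
```

Even this crude version shows the effect described in the abstract: two outliers more than double the plain MLE, while the weighted estimate stays at the level of the clean data. The actual procedure replaces both simplifications with an MDE initial fit and adaptive weights that vanish asymptotically under the model.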
Abstract:
AIM: To document the feasibility and report the results of dosing darbepoetin-alpha at extended intervals of up to once monthly (QM) in a large dialysis patient population. MATERIAL: 175 adult patients, treated at 23 Swiss hemodialysis centres with stable doses of any erythropoiesis-stimulating agent, who were switched by their physicians to darbepoetin-alpha treatment at prolonged dosing intervals (every 2 weeks [Q2W] or QM). METHOD: Multicentre, prospective, observational study. Patients' hemoglobin (Hb) levels and other data were recorded 1 month before conversion (baseline) to an extended darbepoetin-alpha dosing interval, at the time of conversion, and once monthly thereafter up to the evaluation point (a maximum of 12 months or until loss to follow-up). RESULTS: Data for 161 evaluable patients from 23 sites were included in the final analysis. One month prior to conversion, 73% of these patients were receiving darbepoetin-alpha weekly (QW) and 27% biweekly (Q2W). After a mean follow-up of 9.5 months, 34% were on a monthly (QM) dosing regimen, 52% were receiving darbepoetin-alpha Q2W, and 14% QW. The mean (SD) Hb concentration at baseline was 12.3 +/- 1.2 g/dl, compared with 11.9 +/- 1.2 g/dl at the evaluation point. The corresponding mean weekly darbepoetin-alpha dose was 44.3 +/- 33.4 microg at baseline and 37.7 +/- 30.8 microg at the evaluation point. CONCLUSIONS: Conversion to extended darbepoetin-alpha dosing intervals of up to QM, with maintenance of initial Hb concentrations, was successful for the majority of stable dialysis patients.
Abstract:
The state of the art for describing image quality in medical imaging is to assess the performance of an observer conducting a task of clinical interest. This can be done by using a model observer leading to a figure of merit such as the signal-to-noise ratio (SNR). Using the non-prewhitening (NPW) model observer, we objectively characterised the evolution of its figure of merit under various acquisition conditions. The NPW model observer usually requires the modulation transfer function (MTF) as well as noise power spectra. However, although the computation of the MTF poses no problem for the traditional filtered back-projection (FBP) algorithm, this is not the case for iterative reconstruction (IR) algorithms, such as adaptive statistical iterative reconstruction (ASIR) or model-based iterative reconstruction (MBIR). Given that the target transfer function (TTF) had already been shown to accurately express system resolution even with non-linear algorithms, we decided to tune the NPW model observer, replacing the standard MTF by the TTF. The TTF was estimated using a custom-made phantom containing cylindrical inserts surrounded by water. The contrast differences between the inserts and water were plotted for each acquisition condition, and mathematical transformations were then performed leading to the TTF. As expected, the first results showed a dependency of the TTF on the image contrast and noise levels for both ASIR and MBIR. Moreover, FBP also proved to be dependent on contrast and noise when using the lung kernel. These results were then introduced into the NPW model observer. We observed an enhancement of SNR every time we switched from FBP to ASIR to MBIR. IR algorithms greatly improve image quality, especially in low-dose conditions. Based on our results, the use of MBIR could lead to further dose reduction in several clinical applications.
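With the TTF substituted for the MTF, the NPW figure of merit on a discrete 2D frequency grid takes the familiar form SNR² = [Σ|S·TTF|²Δf²]² / Σ|S·TTF|²·NPS·Δf², where S is the task (expected signal) spectrum. A sketch with hypothetical Gaussian task, TTF and white-noise shapes (not measured data):

```python
import numpy as np

def snr_npw(task_spectrum, ttf, nps, df):
    """Non-prewhitening (NPW) model-observer SNR on a discrete 2D grid,
    with the TTF standing in for the MTF:
    SNR^2 = [sum |S*TTF|^2 df^2]^2 / sum |S*TTF|^2 * NPS * df^2."""
    w2 = (np.abs(task_spectrum) * ttf) ** 2
    return np.sqrt((w2.sum() * df ** 2) ** 2 / ((w2 * nps).sum() * df ** 2))

# Hypothetical shapes on a 64x64 grid with 0.5 mm sampling
f = np.fft.fftfreq(64, d=0.5)                    # frequency axis, 1/mm
fx, fy = np.meshgrid(f, f)
rho = np.hypot(fx, fy)
task = np.exp(-(rho / 0.3) ** 2)                 # low-contrast, low-frequency task
ttf = np.exp(-(rho / 0.6) ** 2)                  # Gaussian-shaped TTF
white_nps = 5.0 * np.ones_like(rho)              # flat (white) noise power
print(snr_npw(task, ttf, white_nps, df=f[1] - f[0]))
```

The abstract's findings map directly onto this expression: ASIR/MBIR lower the NPS magnitude and shift its peak to low frequencies, and a contrast-and-noise-dependent TTF changes the weighting term, both of which move the SNR.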
Abstract:
When decommissioning a nuclear facility, it is important to be able to estimate the activity levels of potentially radioactive samples and compare them with the clearance values defined by regulatory authorities. This paper presents a method of calibrating a clearance box monitor based on practical experimental measurements and Monte Carlo simulations. Adjusting the simulation to experimental data obtained with a simple point source permits the computation of absolute calibration factors for more complex geometries, to within slightly more than 20%. The uncertainty of the calibration factor can be improved to about 10% when the simulation is used relatively, in direct comparison with a measurement performed in the same geometry but with another nuclide. The simulation can also be used to validate the experimental calibration procedure when the sample is assumed to be homogeneous but the calibration factor is derived from a plate phantom. For more realistic geometries, such as a small gravel dumpster, Monte Carlo simulation shows that the calibration factor obtained with a larger homogeneous phantom is correct to within about 20%, provided sample density is accounted for as the influencing parameter. Finally, simulation can be used to estimate the effect of a contamination hotspot. The research supporting this paper shows that activity could be largely underestimated in the event of a centrally-located hotspot and overestimated for a peripherally-located hotspot if the sample is assumed to be homogeneously contaminated. This demonstrates the usefulness of being able to complement experimental methods with Monte Carlo simulations in order to estimate calibration factors that cannot be directly measured because of a lack of available material or specific geometries.
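Used relatively, the Monte Carlo model only needs to supply a ratio of simulated detection efficiencies; the measured reference calibration factor carries the absolute scale, which is what cancels part of the model bias and tightens the uncertainty. A sketch with hypothetical numbers:

```python
def calibration_factor_relative(cf_measured_ref, eff_sim_ref, eff_sim_target):
    """Relative use of the Monte Carlo model: scale a calibration factor
    measured for a reference nuclide/geometry (activity per count rate) by
    the ratio of simulated efficiencies. Since CF is inversely proportional
    to efficiency, the target CF is CF_ref * eff_ref / eff_target."""
    return cf_measured_ref * eff_sim_ref / eff_sim_target

# Hypothetical numbers: reference nuclide measured at CF = 2.0 Bq per cps;
# simulated efficiencies 0.10 (reference) and 0.05 (target nuclide/geometry)
print(calibration_factor_relative(2.0, 0.10, 0.05))  # 4.0
```

Any systematic bias common to both simulated efficiencies (source-detector modelling, absolute normalisation) divides out of the ratio, which is why the relative scheme is quoted at roughly 10% rather than 20%.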