955 results for Maximum power point tracker (MPPT)


Relevance: 30.00%

Abstract:

PURPOSE: The aim of this work was to study the central and peripheral thickness of several contact lenses (CL) with different powers and to analyze how thickness variation affects CL oxygen transmissibility. METHODS: Four daily disposable and five monthly or biweekly CL were studied. The powers of each CL were: the maximum negative power of each brand, -6.00 D, -3.00 D, zero power (-0.25 D or -0.50 D), +3.00 D and +6.00 D. Central and peripheral thicknesses were measured with an electronic thickness gauge. Each lens was measured five times (centrally and 3 mm paracentrally) and the mean value was used. Using the oxygen permeability values given by the manufacturers and the measured thicknesses, the variation of oxygen transmissibility with lens power was determined. RESULTS: For monthly or biweekly lenses, central thickness ranged between 0.061 ± 0.002 mm and 0.243 ± 0.002 mm, and peripheral thickness varied between 0.084 ± 0.002 mm and 0.231 ± 0.015 mm. Daily disposable lenses showed central values ranging between 0.056 ± 0.0016 mm and 0.205 ± 0.002 mm and peripheral values between 0.108 ± 0.05 mm and 0.232 ± 0.011 mm. Oxygen transmissibility (in units) of monthly or biweekly CL ranged between 39.4 ± 0.3 and 246.0 ± 14.4, and for daily disposable lenses the values ranged between 9.5 ± 0.5 and 178.1 ± 5.1. CONCLUSIONS: The central and peripheral thicknesses change significantly with CL power, and this has a significant impact on oxygen transmissibility. Eyecare practitioners must take this fact into account when high-power plus or minus lenses are fitted or when continuous wear is considered.
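The transmissibility calculation described above is a plain division of the quoted permeability by the measured thickness. A minimal sketch in Python, using hypothetical lens values rather than figures from the study; the quoted Dk is assumed to be in the conventional 10⁻¹¹ units, giving Dk/t in 10⁻⁹ units:

```python
def dk_over_t(dk, thickness_mm):
    """Oxygen transmissibility Dk/t.

    dk: permeability in the usual 10^-11 units (manufacturer-quoted).
    thickness_mm: measured thickness in mm (converted to cm below).
    Returns Dk/t in the usual 10^-9 units.
    """
    thickness_cm = thickness_mm / 10.0
    return dk / thickness_cm / 100.0  # 10^-11 / cm -> 10^-9 units

# Hypothetical silicone-hydrogel lens, Dk = 140, at two thicknesses:
center = dk_over_t(140, 0.08)     # thin centre of a minus lens
periphery = dk_over_t(140, 0.20)  # thicker periphery of the same lens
print(round(center, 1), round(periphery, 1))
```

Doubling the thickness halves Dk/t, which is why high-power lenses, thicker at the centre or at the periphery, transmit less oxygen at those points.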

Relevance: 30.00%

Abstract:

The postcard is a kaleidoscope of views, ornaments and colours that devotes only a tiny space to the message. It is to photography and to photomechanical reproduction processes that the credit belongs for having industrialized postcard production. And it is the pictures of cities, with their monuments and landscapes, that confer on the postcard its status as a medium of mass communication and grant it an affinity with the tourism industry. The postcard thus seized photography's ambition to reproduce the world, allying itself with the "needs of exploration, expeditions and topographic surveys" that marked the photographic medium in its early days. Taking the postcard as our point of departure, our aim is to show the cultural consequences of the optical revolution that began in the mid-nineteenth century with the invention of the camera and was consummated in the second half of the twentieth century with the advent of the computer. Indeed, from the appearance of the camera and of postcards to the flow of pixels in Google Images and the satellite imagery of Google Earth, an interweaving of territory, power and technology has been set in motion, the earth becoming ever more closely scrutinized by devices of vision, which affects the perception of space. We hope to show with this study that the traditional letter is to email what the postcard is to the post published on a blog or on networks such as Facebook and Twitter. In our view, postcards correspond to the maximum opening of the modern postal system, which, once universal, becomes dependent on and an integral part of telematic transmission networks. They announce, in effect, the speed of information transmission, the brevity of speech, the hegemony of the pictorial dimension of the message and, finally, the unease produced by the fusion of public and private space.

Relevance: 30.00%

Abstract:

Recently there has been a great deal of work on noncommutative algebraic cryptography. This involves the use of noncommutative algebraic objects as the platforms for encryption systems. Most of this work, such as the Anshel-Anshel-Goldfeld scheme, the Ko-Lee scheme and the Baumslag-Fine-Xu modular group scheme, uses nonabelian groups as the basic algebraic object. Some of these encryption methods have been successful and some have been broken. It has been suggested that at this point further pure group-theoretic research, with an eye towards cryptographic applications, is necessary. In the present study we attempt to extend the class of noncommutative algebraic objects to be used in cryptography. In particular we explore several different methods to use a formal power series ring R⟨⟨x1, ..., xn⟩⟩ in noncommuting variables x1, ..., xn as a base to develop cryptosystems. Although R can be any ring, we have in mind formal power series rings over the rationals Q. We use in particular a result of Magnus that a finitely generated free group F has a faithful representation in a quotient of the formal power series ring in noncommuting variables.
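The Magnus result mentioned at the end sends each free generator x_i to 1 + X_i in the power series ring. A toy sketch over Q, with series truncated at a fixed degree; the dict-of-monomials encoding and the degree bound are illustrative conveniences, and faithfulness holds only in the full untruncated ring:

```python
from fractions import Fraction
from itertools import product

DEG = 3  # truncation degree (the Magnus embedding lives in the full ring)

def mul(a, b):
    """Multiply truncated noncommutative power series.

    A series is a dict mapping a monomial (a tuple of generator indices,
    order matters) to its rational coefficient.
    """
    out = {}
    for (m1, c1), (m2, c2) in product(a.items(), b.items()):
        m = m1 + m2  # concatenation = noncommutative product of monomials
        if len(m) <= DEG:
            out[m] = out.get(m, Fraction(0)) + c1 * c2
    return {m: c for m, c in out.items() if c}

ONE = {(): Fraction(1)}

def gen(i, inverse=False):
    """Magnus map: x_i -> 1 + X_i, x_i^{-1} -> 1 - X_i + X_i^2 - ..."""
    if not inverse:
        return {(): Fraction(1), (i,): Fraction(1)}
    return {(i,) * k: Fraction((-1) ** k) for k in range(DEG + 1)}

# x1 * x1^{-1} collapses to 1 (modulo the truncation degree) ...
print(mul(gen(1), gen(1, inverse=True)) == ONE)
# ... while distinct generators do not commute:
print(mul(gen(1), gen(2)) == mul(gen(2), gen(1)))
```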

Relevance: 30.00%

Abstract:

Procamallanus petterae n. sp. from Plecostomus albopunctarus and Spirocamallanus pintoi n. sp. from Corydoras paleatus are described. Procamallanus petterae n. sp. differs from all other species of the genus in having a buccal capsule without spiral bands, with five tooth-like structures at its base and four plate-like structures near the anterior margin; a muscular-to-glandular oesophagus length ratio of 1:1.4; short spicules, 21 µm and 16 µm long; and tails ending abruptly in a sharp point in both sexes. Spirocamallanus pintoi n. sp. is characterized by having 6 to 8 spiral thickenings in the buccal capsule of the male and 9 to 10 in the female, occupying 2/3 of the length of the capsule; a glandular oesophagus more than twice the length of the muscular one; and short spicules, the right 94 µm and the left 82 µm long.

Relevance: 30.00%

Abstract:

This article provides a fresh methodological and empirical approach for assessing price level convergence and its relation to purchasing power parity (PPP) using annual price data for seventeen US cities. We suggest a new procedure that can handle a wide range of PPP concepts in the presence of multiple structural breaks using all possible pairs of real exchange rates. To deal with cross-sectional dependence, we use both cross-sectionally demeaned data and a parametric bootstrap approach. In general, we find more evidence for stationarity when the parity restriction is not imposed, while imposing the parity restriction leads toward rejection of panel stationarity. Our results can be embedded in the Balassa-Samuelson framework, but with the slope of the time trend allowed to change in the long run. The median half-life point estimates are found to be lower than the consensus view regardless of the parity restriction.
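The half-life figures referred to in the last sentence are conventionally derived from the persistence of an AR(1) real exchange rate, half-life = ln(0.5)/ln(ρ). A one-line sketch with a hypothetical persistence value, not an estimate from the paper:

```python
import math

def half_life(rho):
    """Half-life of a PPP deviation under an AR(1) process q_t = rho*q_{t-1} + e_t."""
    return math.log(0.5) / math.log(rho)

# With annual data and a hypothetical persistence of 0.85, a deviation
# takes roughly 4.3 years to decay by half; the PPP 'consensus view'
# is usually quoted as about 3 to 5 years.
print(round(half_life(0.85), 2))
```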

Relevance: 30.00%

Abstract:

It has recently been found that a number of systems displaying crackling noise also show remarkable behavior in the temporal occurrence of successive events versus their size: a scaling law for the probability distributions of waiting times as a function of a minimum size is fulfilled, signaling the existence in those systems of self-similarity in time and size. This property is also present in some non-crackling systems. Here, the uncommon character of the scaling law is illustrated with simple marked renewal processes, built by definition with no correlations. Whereas processes with a finite mean waiting time do not fulfill a scaling law in general and tend towards a Poisson process in the limit of very large minimum sizes, processes without a finite mean tend to another class of distributions, characterized by double power-law waiting-time densities. This is somewhat reminiscent of the generalized central limit theorem. A model with short-range correlations is not able to escape the attraction of those limit distributions. A discussion of open problems in the modeling of these properties is provided.
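The thinning behaviour described for finite-mean processes can be checked by simulation: for an uncorrelated marked renewal process, keeping only events above a minimum size rescales the mean waiting time by 1/P(size ≥ s_min). A sketch with exponential waits and Pareto marks; both distributions are illustrative choices, not those of the paper:

```python
import random
import statistics

random.seed(0)

N = 200_000
# Marked renewal process with finite-mean waiting times and i.i.d.
# power-law-ish marks; uncorrelated by construction.
waits = [random.expovariate(1.0) for _ in range(N)]
sizes = [random.paretovariate(1.5) for _ in range(N)]  # support [1, inf)

def mean_wait_above(smin):
    """Mean waiting time between consecutive events of size >= smin."""
    out, acc = [], 0.0
    for w, s in zip(waits, sizes):
        acc += w
        if s >= smin:
            out.append(acc)
            acc = 0.0
    return statistics.mean(out)

# Thinning rescales the mean wait by 1/P(size >= smin); for Pareto(1.5),
# P(size >= smin) = smin^{-1.5}, so the prediction is smin^{1.5}.
for smin in (1, 2, 4):
    print(smin, round(mean_wait_above(smin), 2), round(smin ** 1.5, 2))
```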

Relevance: 30.00%

Abstract:

The μ-calculus is an extension of modal logic with fixed-point operators. In this work we study the complexity of certain fragments of this logic from two different but closely related points of view: one syntactic (or combinatorial), the other topological. From the syntactic point of view, the properties definable in this formalism are classified according to the combinatorial complexity of the formulas of the logic, that is, according to the number of alternations of fixed-point operators. Comparing two sets of models thus amounts to comparing the syntactic complexity of the associated formulas. From the topological point of view, the properties definable in this logic are compared by means of continuous reductions or according to their positions in the Borel or projective hierarchies. In the first part of this work we adopt the syntactic point of view in order to study the behaviour of the μ-calculus on restricted classes of models. In particular we show that: (1) on the class of symmetric and transitive models the μ-calculus is as expressive as modal logic; (2) on the class of transitive models, every property definable by a μ-calculus formula is definable by a formula without alternation of fixed points; (3) on the class of reflexive models, for every n there is a property that can only be defined by a μ-calculus formula with at least n alternations of fixed points; (4) on the class of well-founded and transitive models the μ-calculus is as expressive as modal logic. That the μ-calculus is as expressive as modal logic on the class of well-founded and transitive models is well known. This result is in fact a consequence of a fixed-point theorem proved independently by De Jongh and Sambin in the mid-1970s.
The proof we give of the collapse of the expressiveness of the μ-calculus on this class of models is nevertheless independent of that result. We then extend the language of the μ-calculus by allowing fixed-point operators to bind negative occurrences of free variables. By showing that this formalism is as expressive as the modal fragment, we are able to provide a new proof of the uniqueness-of-fixed-points theorem of Bernardi, De Jongh and Sambin and a constructive proof of the existence theorem of De Jongh and Sambin. As far as transitive models are concerned, from the topological point of view this time, we prove that modal logic corresponds to the Borel fragment of the μ-calculus on this class of transition systems. In other words, we verify that every definable property of transitive models that is, topologically speaking, a Borel property is necessarily a modal property, and conversely. This characterization of the modal fragment follows from the fact that we are able to show that, modulo EF-bisimulation, a set of trees is definable in the temporal logic EF if and only if it is Borel. Since these two properties can be shown to coincide with an effective characterization of definability in the logic EF for finitely branching trees given by Bojanczyk and Idziaszek [24], we obtain their decidability as a corollary. In a second part, we study the topological complexity of a sub-fragment of the alternation-free fragment of the μ-calculus. We show that a set of trees is definable by a formula of this fragment with at least n alternations if and only if the property lies at least at the n-th level of the Borel hierarchy. In other words, we verify that for this fragment of the μ-calculus the topological and combinatorial points of view coincide.
Moreover, we describe an effective procedure capable of computing, for any property definable in this language, its position in the Borel hierarchy, and hence the number of fixed-point alternations needed to define it. We then turn to the classification of sets of trees by continuous reduction, and give an effective description of the Wadge order of the class of sets of trees definable in the formalism under consideration. In particular, the hierarchy we obtain has height (ω^ω)^ω. We complete these results by describing an algorithm that computes the position in this hierarchy of any definable property.
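The fixed-point operators at the heart of the μ-calculus are evaluated on a finite model by Knaster-Tarski iteration. A minimal illustration on a hypothetical four-state frame, computing μX.(p ∨ ◇X), which denotes the states from which a p-state is reachable:

```python
# Least fixed point mu X.(p OR <>X) by Knaster-Tarski iteration on a
# small hypothetical Kripke frame (states 0..3; state 3 only loops).

succ = {0: {1}, 1: {2}, 2: set(), 3: {3}}  # transition relation
p = {2}                                     # valuation of the proposition p

def diamond(S):
    """<>S: states with at least one successor in S."""
    return {s for s, ts in succ.items() if ts & S}

def lfp():
    """Iterate the monotone operator X |-> p U <>X from the empty set."""
    X = set()
    while True:
        nxt = p | diamond(X)
        if nxt == X:
            return X
        X = nxt

print(sorted(lfp()))  # state 3 never reaches p, so it is excluded
```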

Relevance: 30.00%

Abstract:

Our purpose in this article is to define a network structure which is based on two egos instead of the egocentered (one ego) or the complete network (n egos). We describe the characteristics and properties of this kind of network, which we call a "nosduocentered network", comparing it with complete and egocentered networks. The key point for this kind of network is that relations exist between the two main egos and all alters, but relations among the alters are not observed. We then use new social network measures adapted to the nosduocentered network, some of which are based on measures for complete networks such as degree, betweenness, closeness centrality or density, while others are tailor-made for nosduocentered networks. We specify three regression models to predict the research performance of PhD students based on these social network measures for different networks such as advice, collaboration, emotional support and trust. Data used are from Slovenian PhD students and their s
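The key structural constraint, ties observed only between the two egos and the alters, suggests measures whose denominators count only observable dyads. A small sketch with hypothetical actors; the density formula below is one plausible adaptation, not necessarily the paper's:

```python
# A 'nosduocentered' network: ties are only observed between the two egos
# (A, B) and the alters, never among alters. Names and ties are invented.

egos = ("A", "B")
alters = ("c", "d", "e")
ties = {("A", "B"), ("A", "c"), ("A", "d"), ("B", "d"), ("B", "e")}

def degree(node):
    """Number of observed ties incident to a node."""
    return sum(node in t for t in ties)

# Observable dyads: the ego-ego pair plus each ego with each alter;
# alter-alter dyads are excluded from the denominator because they
# are unobservable in this design.
observable = 1 + len(egos) * len(alters)
density = len(ties) / observable

print(degree("A"), degree("B"), round(density, 2))
```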

Relevance: 30.00%

Abstract:

PURPOSE: Studies of diffuse large B-cell lymphoma (DLBCL) are typically evaluated by using a time-to-event approach with relapse, re-treatment, and death commonly used as the events. We evaluated the timing and type of events in newly diagnosed DLBCL and compared patient outcome with reference population data. PATIENTS AND METHODS: Patients with newly diagnosed DLBCL treated with immunochemotherapy were prospectively enrolled onto the University of Iowa/Mayo Clinic Specialized Program of Research Excellence Molecular Epidemiology Resource (MER) and the North Central Cancer Treatment Group NCCTG-N0489 clinical trial from 2002 to 2009. Patient outcomes were evaluated at diagnosis and in the subsets of patients achieving event-free status at 12 months (EFS12) and 24 months (EFS24) from diagnosis. Overall survival was compared with age- and sex-matched population data. Results were replicated in an external validation cohort from the Groupe d'Etude des Lymphomes de l'Adulte (GELA) Lymphome Non Hodgkinien 2003 (LNH2003) program and a registry based in Lyon, France. RESULTS: In all, 767 patients with newly diagnosed DLBCL who had a median age of 63 years were enrolled onto the MER and NCCTG studies. At a median follow-up of 60 months (range, 8 to 116 months), 299 patients had an event and 210 patients had died. Patients achieving EFS24 had an overall survival equivalent to that of the age- and sex-matched general population (standardized mortality ratio [SMR], 1.18; P = .25). This result was confirmed in 820 patients from the GELA study and registry in Lyon (SMR, 1.09; P = .71). Simulation studies showed that EFS24 has comparable power to continuous EFS when evaluating clinical trials in DLBCL. CONCLUSION: Patients with DLBCL who achieve EFS24 have a subsequent overall survival equivalent to that of the age- and sex-matched general population. EFS24 will be useful in patient counseling and should be considered as an end point for future studies of newly diagnosed DLBCL.
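The standardized mortality ratio used for the population comparison is observed deaths divided by the deaths expected under age- and sex-matched population rates. A sketch with invented counts; the strata, rates and death count are illustrative, not the MER/GELA figures:

```python
# SMR = observed deaths / expected deaths, where the expectation applies
# matched population death rates to the cohort's person-years at risk.
# All numbers below are hypothetical.

def smr(observed_deaths, person_years_by_stratum, population_rate_by_stratum):
    expected = sum(person_years_by_stratum[k] * population_rate_by_stratum[k]
                   for k in person_years_by_stratum)
    return observed_deaths / expected

person_years = {"60-69": 1200.0, "70-79": 800.0}  # follow-up after EFS24
pop_rates = {"60-69": 0.015, "70-79": 0.035}      # deaths per person-year

# Expected deaths: 1200*0.015 + 800*0.035 = 46; an SMR near 1 means
# mortality indistinguishable from the matched general population.
print(round(smr(52, person_years, pop_rates), 2))
```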

Relevance: 30.00%

Abstract:

In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) presents a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. Further complication may stem from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that take a single, fixed number of contributors as their output can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic, using Bayes' theorem, and provides a probability distribution over a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N, that is, the number of contributors, and the actual value taken by N. Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that uses categorical assumptions about N.
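The probabilistic strategy can be illustrated at a single locus with a qualitative model in which N contributors donate 2N alleles drawn i.i.d. from population frequencies; the posterior over N then follows from Bayes' theorem. The frequencies, the candidate range for N and the flat prior below are all illustrative assumptions, not the paper's model:

```python
from itertools import combinations

# P(2N i.i.d. draws show exactly the observed allele set), by
# inclusion-exclusion over subsets of the observed set. Allele
# frequencies and the prior over N are invented for illustration.

freqs = {"a": 0.3, "b": 0.2, "c": 0.1, "d": 0.4}
observed = ("a", "b", "c")
prior = {1: 0.25, 2: 0.25, 3: 0.25, 4: 0.25}

def lik(n):
    """Probability that 2n draws cover 'observed' and nothing else."""
    s = 0.0
    for k in range(len(observed) + 1):
        for sub in combinations(observed, k):
            p = sum(freqs[a] for a in sub)
            s += (-1) ** (len(observed) - k) * p ** (2 * n)
    return s

post = {n: prior[n] * lik(n) for n in prior}
z = sum(post.values())
post = {n: round(v / z, 3) for n, v in post.items()}
print(post)  # N=1 is impossible: two alleles cannot show three distinct ones
```

Adding loci simply multiplies per-locus likelihoods under independence, which is how the distribution sharpens as profile quality improves.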

Relevance: 30.00%

Abstract:

This paper aims to survey the techniques and methods described in the literature to analyse and characterise voltage sags, together with the corresponding objectives of these works. The study has been performed from a data mining point of view.

Relevance: 30.00%

Abstract:

HIV virulence, i.e. the time of progression to AIDS, varies greatly among patients. As for other rapidly evolving pathogens of humans, it is difficult to know if this variance is controlled by the genotype of the host or that of the virus because the transmission chain is usually unknown. We apply the phylogenetic comparative approach (PCA) to estimate the heritability of a trait from one infection to the next, which indicates the control of the virus genotype over this trait. The idea is to use viral RNA sequences obtained from patients infected by HIV-1 subtype B to build a phylogeny, which approximately reflects the transmission chain. Heritability is measured statistically as the propensity for patients close in the phylogeny to exhibit similar infection trait values. The approach reveals that up to half of the variance in set-point viral load, a trait associated with virulence, can be heritable. Our estimate is significant and robust to noise in the phylogeny. We also check for the consistency of our approach by showing that a trait related to drug resistance is almost entirely heritable. Finally, we show the importance of taking into account the transmission chain when estimating correlations between infection traits. The fact that HIV virulence is, at least partially, heritable from one infection to the next has clinical and epidemiological implications. The difference between earlier studies and ours comes from the quality of our dataset and from the power of the PCA, which can be applied to large datasets and accounts for within-host evolution. The PCA opens new perspectives for approaches linking clinical data and evolutionary biology because it can be extended to study other traits or other infectious diseases.
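The notion of trait heritability across a transmission chain can be caricatured with a deliberately simplified linear model (no phylogenetic structure; all parameters invented): each recipient's trait is h times the donor's plus noise, and the donor-recipient regression slope recovers h:

```python
import random

random.seed(1)

# Toy transmission chain: recipient trait = h * donor trait + noise.
# The regression slope of recipient on donor estimates the heritability h.
# The real PCA works on a phylogeny, not a single chain; this is only
# the underlying statistical idea.

h = 0.5
chain = [0.0]
for _ in range(50_000):
    chain.append(h * chain[-1] + random.gauss(0, 1))

donors, recips = chain[:-1], chain[1:]
mx = sum(donors) / len(donors)
my = sum(recips) / len(recips)
cov = sum((x - mx) * (y - my) for x, y in zip(donors, recips)) / len(donors)
var = sum((x - mx) ** 2 for x in donors) / len(donors)

print(round(cov / var, 2))  # recovers h up to sampling noise
```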

Relevance: 30.00%

Abstract:

Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that if the data follow the model, then asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as the influence function of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performances of the WMLs based on each of them are studied. It results that in a great variety of situations the WML substantially improves the initial estimator, both in terms of finite-sample mean square error and in terms of bias under contamination. Besides, the performance of the WML is rather stable under a change of the MDE, even if the MDEs have very different behaviors. Two examples of application of the WML to real data are considered. In both of them, the necessity for a robust estimator is clear: the maximum likelihood estimator is badly corrupted by the presence of a few outliers. This procedure is particularly natural in the discrete distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
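The two-phase scheme can be caricatured for a Poisson model, with the sample median standing in for the minimum disparity estimator of the thesis; the contamination level and the cutoff rule are illustrative, not the adaptive weighting actually proposed:

```python
import math
import random

random.seed(2)

def rpois(lam):
    """Knuth's Poisson sampler (stdlib random has none)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

# Poisson(3) data with 2% gross outliers mixed in.
data = [rpois(3.0) for _ in range(980)] + [50] * 20

# Phase 1: a crude robust initial fit (the median) flags implausible counts.
init = sorted(data)[len(data) // 2]
cutoff = init + 5 * math.sqrt(init)  # generous Poisson-scale band
clean = [x for x in data if x <= cutoff]

# Phase 2: ML on the retained points (for Poisson, simply the mean).
raw_ml = sum(data) / len(data)   # corrupted by the outliers
wml = sum(clean) / len(clean)    # close to the true rate 3
print(round(raw_ml, 2), round(wml, 2))
```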

Relevance: 30.00%

Abstract:

We study the effect of strong heterogeneities on the fracture of disordered materials using a fiber bundle model. The bundle is composed of two subsets of fibers: a fraction 0 ≤ α ≤ 1 of fibers is unbreakable, while the remaining 1 − α fraction is characterized by a distribution of breaking thresholds. Assuming global load sharing, we show analytically that there exists a critical fraction of components αc which separates two qualitatively different regimes of the system: below αc the burst size distribution is a power law with the usual exponent τ = 5/2, while above αc the exponent switches to a lower value τ = 9/4 and a cutoff function occurs with a diverging characteristic size. Analyzing the macroscopic response of the system, we demonstrate that the transition is conditional on disorder distributions for which the constitutive curve has a single maximum and an inflexion point, defining a novel universality class of breakdown phenomena.
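Under global load sharing the macroscopic constitutive curve of such a mixed bundle is σ(ε) = ε[α + (1 − α)(1 − F(ε))]. A quick numerical check with Weibull thresholds (an illustrative choice of F, not necessarily the paper's), showing that the curve loses its local maximum once the unbreakable fraction α is large enough:

```python
import math

def sigma(eps, alpha, m=2.0):
    """Constitutive curve with Weibull thresholds F(eps) = 1 - exp(-eps^m):
    unbreakable fibers carry load alpha*eps forever; the rest survive
    with probability exp(-eps^m)."""
    return eps * (alpha + (1.0 - alpha) * math.exp(-eps ** m))

def has_local_max(alpha, m=2.0, lo=0.01, hi=5.0, steps=2000):
    """True if the curve decreases somewhere on the grid (a peak exists)."""
    vals = [sigma(lo + i * (hi - lo) / steps, alpha, m) for i in range(steps)]
    return any(b < a for a, b in zip(vals, vals[1:]))

# Few unbreakable fibers: the usual peak-and-drop curve.
# Many unbreakable fibers: sigma is monotonically increasing.
print(has_local_max(0.05), has_local_max(0.8))
```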

Relevance: 30.00%

Abstract:

The analysis of multiantenna capacity in the high-SNR regime has hitherto focused on the high-SNR slope (or maximum multiplexing gain), which quantifies the multiplicative increase as a function of the number of antennas. This traditional characterization is unable to assess the impact of prominent channel features since, for a majority of channels, the slope equals the minimum of the number of transmit and receive antennas. Furthermore, a characterization based solely on the slope captures only the scaling but has no notion of the power required for a certain capacity. This paper advocates a more refined characterization whereby, as a function of the SNR in dB, the high-SNR capacity is expanded as an affine function in which the impact of channel features such as antenna correlation, unfaded components, etc., resides in the zero-order term, or power offset. The power offset, for which we find insightful closed-form expressions, is shown to play a chief role for SNR levels of practical interest.
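The affine expansion C(SNR) ≈ S∞(log₂ SNR − L∞) can also be fitted numerically. A Monte Carlo sketch for a 2×2 i.i.d. Rayleigh channel (sample sizes and SNR points are illustrative), reading the slope S∞ from two high-SNR capacities and the power offset L∞ from the intercept:

```python
import math
import random

random.seed(3)

def cn():
    """One CN(0,1) complex Gaussian entry."""
    s = math.sqrt(0.5)
    return complex(random.gauss(0, s), random.gauss(0, s))

def capacity(snr_db, trials=20_000, nt=2):
    """E[log2 det(I + (SNR/nt) H H^H)] for a 2x2 i.i.d. Rayleigh H."""
    g = 10 ** (snr_db / 10) / nt
    acc = 0.0
    for _ in range(trials):
        h11, h12, h21, h22 = cn(), cn(), cn(), cn()
        a11 = abs(h11) ** 2 + abs(h12) ** 2          # A = H H^H entries
        a22 = abs(h21) ** 2 + abs(h22) ** 2
        a12 = h11 * h21.conjugate() + h12 * h22.conjugate()
        det = (1 + g * a11) * (1 + g * a22) - (g * abs(a12)) ** 2
        acc += math.log2(det)
    return acc / trials

c1, c2 = capacity(30), capacity(40)
slope = (c2 - c1) / (math.log2(1e4) - math.log2(1e3))  # bits per 3 dB
offset = math.log2(1e3) - c1 / slope                   # L_inf estimate

print(round(slope, 1), round(offset, 2))
```

The fitted slope comes out at min(nt, nr) = 2, as the abstract notes for most channels, while the offset is where correlation or unfaded components would leave their mark.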