929 results for Maximum Power Point Tracking (MPPT)
Abstract:
An energy harvesting system requires an energy storage device to store the energy retrieved from the surrounding environment. This can be either a rechargeable battery or a supercapacitor. Due to their limited lifetime, rechargeable batteries need to be periodically replaced. A supercapacitor, which ideally has an unlimited number of charge/discharge cycles, can therefore be used to store the energy; however, a voltage regulator is required to obtain a constant output voltage as the supercapacitor discharges. This can be implemented with a switched-capacitor (SC) DC-DC converter, which allows complete integration in CMOS technology, although several topologies are required to obtain a high efficiency. This thesis presents a complete analysis of four different topologies in order to derive expressions that allow the converter to be designed and the optimum input voltage range of each topology to be determined. To better understand the parasitic effects, the implementation of the capacitors and the non-ideal behaviour of the switches in 130 nm technology were carefully studied. With these two analyses, a multi-ratio SC DC-DC converter was designed with an output power of 2 mW, a maximum efficiency of 77%, and a maximum steady-state output ripple of 23 mV, for an input voltage range of 2.3 V down to 0.85 V. The proposed converter has four operating states that provide the conversion ratios of 1/2, 2/3, 1/1 and 3/2, and its clock frequency is automatically adjusted to produce a stable output voltage of 1 V. These features are implemented through two distinct controller circuits that use asynchronous state machines (ASM) to dynamically adjust the clock frequency and to select the active state of the converter. All the theoretical expressions, as well as the behaviour of the whole system, were verified using electrical simulations.
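As an illustration of the ratio-selection step, the sketch below picks, for a given input voltage, the smallest conversion ratio that can still ideally reach the 1 V output. The thresholds come from the ideal relation r * Vin >= Vout and are a simplification: a real controller would add margins to absorb switch and parasitic losses.

```python
# Hypothetical ratio-selection logic for a multi-ratio SC converter:
# each ratio r is usable while r * Vin >= Vout; loss margins are omitted.
VOUT = 1.0
RATIOS = (1/2, 2/3, 1/1, 3/2)   # the four conversion ratios

def select_ratio(vin: float) -> float:
    """Return the smallest ratio that can still ideally reach VOUT."""
    for r in RATIOS:
        if r * vin >= VOUT:
            return r
    raise ValueError(f"Vin = {vin} V is below the usable input range")

for vin in (2.3, 1.8, 1.2, 0.85):
    print(f"Vin = {vin:.2f} V -> ratio = {select_ratio(vin):.3f}")
```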
Abstract:
Purpose: To evaluate changes in anterior corneal topography and higher-order aberrations (HOA) after 14 days of rigid gas-permeable (RGP) contact lens (CL) wear in keratoconus subjects, comparing two different fitting approaches. Methods: Thirty-one keratoconus subjects (50 eyes) without previous history of CL wear were recruited for the study. Subjects were randomly fitted with either an apical-touch or a three-point-touch fitting approach; the lens back optic zone radius (BOZR) was 0.4 mm and 0.1 mm flatter than the first definite apical clearance lens, respectively. Differences between baseline and post-CL wear in the steepest, flattest and average corneal power (ACP) readings, central corneal astigmatism (CCA), maximum tangential curvature (KTag), anterior corneal surface asphericity, anterior corneal surface HOA and thinnest corneal thickness measured with the Pentacam were compared. Results: A statistically significant flattening was found over time in the flattest and steepest simulated keratometry and the ACP in the apical-touch group (all p < 0.01). A statistically significant reduction in KTag was found in both groups after contact lens wear (all p < 0.05). Significant reductions were found over time in CCA (p = 0.001) and anterior corneal asphericity in both groups (p < 0.001). Thickness at the thinnest corneal point increased significantly after CL wear (p < 0.0001). Coma-like and total HOA root mean square (RMS) error were significantly reduced following CL wear in both fitting approaches (all p < 0.05). Conclusion: Short-term rigid gas-permeable CL wear flattens the anterior cornea, increases the thinnest corneal thickness and reduces anterior surface HOA in keratoconus subjects. Apical-touch fitting was associated with greater corneal flattening than three-point-touch lens wear.
Abstract:
PURPOSE: The aim of this work was to study the central and peripheral thickness of several contact lenses (CL) with different powers and to analyse how thickness variation affects CL oxygen transmissibility. METHODS: Four daily disposable and five monthly or biweekly CL were studied. The powers of each CL were: the maximum negative power of each brand; -6.00 D; -3.00 D; zero power (-0.25 D or -0.50 D); +3.00 D; and +6.00 D. Central and peripheral thicknesses were measured with an electronic thickness gauge. Each lens was measured five times (centrally and 3 mm paracentrally) and the mean value was used. Using the oxygen permeability values given by the manufacturers and the measured thicknesses, the variation of oxygen transmissibility with lens power was determined. RESULTS: For monthly or biweekly lenses, central thickness ranged between 0.061 ± 0.002 mm and 0.243 ± 0.002 mm, and peripheral thickness between 0.084 ± 0.002 mm and 0.231 ± 0.015 mm. Daily disposable lenses showed central values ranging between 0.056 ± 0.0016 mm and 0.205 ± 0.002 mm, and peripheral values between 0.108 ± 0.05 mm and 0.232 ± 0.011 mm. Oxygen transmissibility of monthly or biweekly CL ranged between 39.4 ± 0.3 and 246.0 ± 14.4 units, and for daily disposable lenses between 9.5 ± 0.5 and 178.1 ± 5.1 units. CONCLUSIONS: Central and peripheral thickness changes significantly with CL power, and this has a significant impact on oxygen transmissibility. Eyecare practitioners must take this fact into account when high-power plus or minus lenses are fitted or when continuous wear is considered.
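For reference, oxygen transmissibility is the permeability Dk divided by the lens thickness t. The short sketch below applies this relation with a hypothetical Dk value and the two extreme central thicknesses reported above; conventionally Dk is reported in 10^-11 units and Dk/t in 10^-9 units, hence the factor of 100.

```python
# Minimal sketch: oxygen transmissibility Dk/t from a manufacturer's
# permeability Dk and a measured thickness t. The Dk value is hypothetical;
# the thicknesses are the extreme central values reported above.
def transmissibility(dk_1e11: float, thickness_mm: float) -> float:
    """Dk in 10^-11 units, t converted to cm; result in 10^-9 units."""
    t_cm = thickness_mm / 10.0
    return dk_1e11 / t_cm / 100.0

dk = 100.0                    # hypothetical permeability (10^-11 units)
for t_mm in (0.061, 0.243):   # thinnest and thickest central values
    print(f"t = {t_mm} mm -> Dk/t = {transmissibility(dk, t_mm):.1f}")
```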
Abstract:
The postcard is a kaleidoscope of views, ornaments and colours that devotes only a very small space to the message. It is to photography and to photomechanical reproduction processes that the credit belongs for having industrialised the production of the postcard. And it is the pictures of cities, with their monuments and landscapes, that give the postcard its status as a medium of mass communication and grant it an affinity with the tourism industry. The postcard thus took over photography's ambition to reproduce the world, allying itself with the "needs of exploration, expeditions and topographic surveys" of the photographic medium in its early days. Taking the postcard as our point of departure, our aim is to show the cultural consequences of the optical revolution that began in the middle of the nineteenth century with the invention of the camera and was consummated in the second half of the twentieth century with the appearance of the computer. Indeed, from the appearance of the camera and of postcards up to the flow of pixels of Google Images and the satellite imagery of Google Earth, an interweaving of territory, power and technics has been at work, the earth becoming, as a result, ever more closely scrutinised by viewing devices, which affects the perception of space. We hope to show with this study that the traditional letter is to email what the postcard is to the post published on a blog or on networks such as Facebook and Twitter. In our view, postcards correspond to the maximal opening-up of the modern postal system, which, from being universal, becomes dependent on, and an integral part of, telematic transmission networks. They announce, in effect, the speed of information transmission, the brevity of speech, the hegemony of the image dimension of the message and, finally, the unease caused by the merging of public space with private space.
Abstract:
Master's dissertation in Interactive Media (Média Interativos).
Abstract:
Recently there has been a great deal of work on noncommutative algebraic cryptography. This involves the use of noncommutative algebraic objects as the platforms for encryption systems. Most of this work, such as the Anshel-Anshel-Goldfeld scheme, the Ko-Lee scheme and the Baumslag-Fine-Xu Modular group scheme, uses nonabelian groups as the basic algebraic object. Some of these encryption methods have been successful and some have been broken. It has been suggested that, at this point, further pure group-theoretic research with an eye towards cryptographic applications is necessary. In the present study we attempt to extend the class of noncommutative algebraic objects to be used in cryptography. In particular, we explore several different methods to use a formal power series ring R⟨⟨x1, ..., xn⟩⟩ in noncommuting variables x1, ..., xn as a base to develop cryptosystems. Although R can be any ring, we have in mind formal power series rings over the rationals Q. In particular, we use a result of Magnus that a finitely generated free group F has a faithful representation in a quotient of the formal power series ring in noncommuting variables.
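The Magnus result referred to above sends each free generator to 1 + x_i, and its inverse to the truncated geometric series 1 - x_i + x_i^2 - .... A minimal sketch, with the truncation degree and data layout chosen purely for illustration, is:

```python
# Minimal sketch of the Magnus map g_i -> 1 + x_i into Q<<x1,...,xn>>,
# with series truncated at a fixed degree. Elements are dicts mapping
# words (tuples of variable indices) to rational coefficients.
from fractions import Fraction

DEG = 4  # truncation degree (an arbitrary illustrative choice)

def mul(a, b):
    out = {}
    for wa, ca in a.items():
        for wb, cb in b.items():
            w = wa + wb
            if len(w) <= DEG:
                out[w] = out.get(w, Fraction(0)) + ca * cb
    return {w: c for w, c in out.items() if c}

def gen(i, sign=+1):
    """Image of generator i, or of its inverse via 1 - x + x^2 - ..."""
    if sign > 0:
        return {(): Fraction(1), (i,): Fraction(1)}
    return {(i,) * k: Fraction((-1) ** k) for k in range(DEG + 1)}

def word(letters):
    """Evaluate a group word given as (generator, sign) pairs."""
    acc = {(): Fraction(1)}
    for i, s in letters:
        acc = mul(acc, gen(i, s))
    return acc

# The commutator a b a^-1 b^-1 maps to 1 + (x0 x1 - x1 x0) + higher-order
# terms, so the representation separates it from the identity.
comm = word([(0, +1), (1, +1), (0, -1), (1, -1)])
for w, c in sorted(comm.items(), key=lambda kv: (len(kv[0]), kv[0])):
    print(w, c)
```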
Abstract:
Procamallanus petterae n. sp. from Plecostomus albopunctatus and Spirocamallanus pintoi n. sp. from Corydoras paleatus are described. Procamallanus petterae n. sp. differs from all other species of the genus by having a buccal capsule without spiral bands, with five tooth-like structures at its base and four plate-like structures near the anterior margin; a muscular/glandular oesophagus length ratio of 1:1.4; short spicules, 21 µm and 16 µm long; and tails ending abruptly in a sharp point in both sexes. Spirocamallanus pintoi n. sp. is characterized by having 6 to 8 spiral thickenings in the buccal capsule of the male and 9 to 10 in the female, occupying 2/3 of the length of the capsule; a glandular oesophagus more than twice the length of the muscular one; and short spicules, the right 94 µm and the left 82 µm long.
Abstract:
This article provides a fresh methodological and empirical approach for assessing price level convergence and its relation to purchasing power parity (PPP) using annual price data for seventeen US cities. We suggest a new procedure that can handle a wide range of PPP concepts in the presence of multiple structural breaks, using all possible pairs of real exchange rates. To deal with cross-sectional dependence, we use both cross-sectionally demeaned data and a parametric bootstrap approach. In general, we find more evidence for stationarity when the parity restriction is not imposed, while imposing the parity restriction leads towards rejection of panel stationarity. Our results can be embedded in the view of the Balassa-Samuelson approach, but with the slope of the time trend allowed to change in the long run. The median half-life point estimates are found to be lower than the consensus view regardless of the parity restriction.
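As a reminder of the half-life notion used in the last sentence: for an AR(1) real exchange rate with persistence rho, the half-life of a deviation from parity is ln(0.5)/ln(rho). The sketch below evaluates it for hypothetical rho values (the consensus view places PPP half-lives at roughly three to five years).

```python
# AR(1) half-life of a PPP deviation: the horizon at which an initial
# shock has decayed by half. The rho values are hypothetical, not the
# article's estimates.
import math

def half_life(rho: float) -> float:
    return math.log(0.5) / math.log(rho)

for rho in (0.85, 0.95):  # annual persistence of the real exchange rate
    print(f"rho = {rho:.2f} -> half-life ~ {half_life(rho):.1f} years")
```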
Abstract:
It has recently been found that a number of systems displaying crackling noise also show a remarkable behavior regarding the temporal occurrence of successive events versus their size: a scaling law for the probability distributions of waiting times as a function of a minimum size is fulfilled, signaling the existence in those systems of self-similarity in time-size. This property is also present in some non-crackling systems. Here, the uncommon character of the scaling law is illustrated with simple marked renewal processes, built by definition with no correlations. Whereas processes with a finite mean waiting time do not fulfill a scaling law in general and tend towards a Poisson process in the limit of very large sizes, processes without a finite mean tend to another class of distributions, characterized by double power-law waiting-time densities. This is somewhat reminiscent of the generalized central limit theorem. A model with short-range correlations is not able to escape the attraction of those limit distributions. A discussion of open problems in the modeling of these properties is provided.
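A minimal simulation, under assumed parameters, of the kind of uncorrelated marked renewal process described above: exponential waiting times carry independent Pareto-distributed sizes, and conditioning on a minimum size simply thins the process.

```python
# Uncorrelated marked renewal process: exponential waiting times marked
# with independent Pareto sizes. Keeping only events of size >= s_min
# thins the process, so the conditioned waiting times stay exponential
# with a rescaled mean (the Poisson-limit behavior described above).
# All parameters are illustrative.
import random

random.seed(1)
N, ALPHA = 200_000, 1.5          # number of events, Pareto tail exponent
waits = [random.expovariate(1.0) for _ in range(N)]
sizes = [random.paretovariate(ALPHA) for _ in range(N)]

for s_min in (1.0, 2.0, 4.0):
    # waiting time between consecutive events of size >= s_min
    t, kept = 0.0, []
    for w, s in zip(waits, sizes):
        t += w
        if s >= s_min:
            kept.append(t)
            t = 0.0
    print(f"s_min = {s_min}: mean waiting time ~ {sum(kept)/len(kept):.2f}")
```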
Abstract:
The μ-calculus is an extension of modal logic with fixed-point operators. In this work we study the complexity of certain fragments of this logic from two different but closely related points of view: one syntactic (or combinatorial), the other topological. From the syntactic point of view, the properties definable in this formalism are classified according to the combinatorial complexity of its formulas, that is, according to the number of alternations of fixed-point operators; comparing two sets of models then amounts to comparing the syntactic complexity of the associated formulas. From the topological point of view, the properties definable in this logic are compared by means of continuous reductions, or according to their positions in the Borel or projective hierarchies. In the first part of this work we adopt the syntactic point of view in order to study the behaviour of the μ-calculus on restricted classes of models. In particular we show that: (1) on the class of symmetric and transitive models, the μ-calculus is as expressive as modal logic; (2) on the class of transitive models, every property definable by a μ-calculus formula is definable by a formula without alternation of fixed points; (3) on the class of reflexive models, for every n there is a property that can only be defined by a μ-calculus formula with at least n alternations of fixed points; (4) on the class of well-founded and transitive models, the μ-calculus is as expressive as modal logic. The fact that the μ-calculus is as expressive as modal logic on the class of well-founded and transitive models is well known; it is a consequence of a fixed-point theorem proved independently by De Jongh and Sambin in the mid-1970s. The proof we give of this collapse of the expressive power of the μ-calculus is nevertheless independent of that result. We then extend the language of the μ-calculus by allowing fixed-point operators to bind negative occurrences of free variables. By showing that this formalism is as expressive as the modal fragment, we are able to give a new proof of the uniqueness theorem for fixed points of Bernardi, De Jongh and Sambin, and a constructive proof of the existence theorem of De Jongh and Sambin. Concerning transitive models, this time from the topological point of view, we prove that modal logic corresponds to the Borel fragment of the μ-calculus on this class of transition systems. In other words, we verify that every definable property of transitive models that is topologically a Borel property is necessarily a modal property, and conversely. This characterisation of the modal fragment follows from our showing that, modulo EF-bisimulation, a set of trees is definable in the temporal logic EF if and only if it is Borel. Since these two properties can be shown to coincide with an effective characterisation of definability in the logic EF given by Bojanczyk and Idziaszek [24] for finitely branching trees, we obtain their decidability as a corollary. In the second part, we study the topological complexity of a sub-fragment of the alternation-free fragment of the μ-calculus.
We show that a set of trees is definable by a formula of this fragment with at least n alternations if and only if the property lies at least at the n-th level of the Borel hierarchy. In other words, we verify that for this fragment of the μ-calculus the topological and combinatorial points of view coincide. Moreover, we describe an effective procedure that computes, for any property definable in this language, its position in the Borel hierarchy, and hence the number of fixed-point alternations needed to define it. We then turn to the classification of sets of trees by continuous reduction, and give an effective description of the Wadge order on the class of sets of trees definable in the formalism under consideration. In particular, the hierarchy we obtain has height (ω^ω)^ω. We complete these results by describing an algorithm that computes the position in this hierarchy of any definable property.
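For readers unfamiliar with the formalism, the standard μ-calculus grammar and two textbook formulas are shown below; this is generic notation, not drawn from the thesis itself.

```latex
% Standard modal mu-calculus syntax (generic, not specific to the thesis):
\[
  \varphi \;::=\; p \mid X \mid \neg\varphi \mid \varphi \wedge \varphi
           \mid \Diamond\varphi \mid \mu X.\,\varphi
\]
% "Some state satisfying p is reachable" (no alternation):
\[
  \mu X.\,(p \vee \Diamond X)
\]
% "p holds infinitely often along some path" (one mu/nu alternation):
\[
  \nu Y.\,\mu X.\,\bigl((p \wedge \Diamond Y) \vee \Diamond X\bigr)
\]
```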
Abstract:
Our purpose in this article is to define a network structure that is based on two egos instead of the egocentered (one ego) or the complete network (n egos). We describe the characteristics and properties of this kind of network, which we call a "nosduocentered network", comparing it with complete and egocentered networks. The key point of this kind of network is that relations exist between the two main egos and all alters, but relations among the other actors are not observed. We then use new social network measures adapted to the nosduocentered network, some of which are based on measures for complete networks, such as degree, betweenness, closeness centrality or density, while others are tailor-made for nosduocentered networks. We specify three regression models to predict the research performance of PhD students from these social network measures for different networks such as advice, collaboration, emotional support and trust. The data used are from Slovenian PhD students and their supervisors.
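A toy construction of such a network, with invented actors, might look as follows; only ego-ego and ego-alter ties are recorded, so density has to be read against the dyads that are observable at all.

```python
# Toy "nosduocentered" network: two egos with observed ties to each other
# and to some alters; alter-alter ties are unobserved by design.
# Actors and ties are invented for illustration.
import networkx as nx

egos = ("student", "supervisor")
alters = ("a1", "a2", "a3", "a4")
ties = {"student": ("a1", "a2", "a3"), "supervisor": ("a2", "a3", "a4")}

G = nx.Graph()
G.add_edge(*egos)                      # the ego-ego tie
for ego, partners in ties.items():
    for alter in partners:
        G.add_edge(ego, alter)         # ego-alter ties only

# Density is computed over the observable dyads, not all n*(n-1)/2 pairs:
observable = len(egos) * len(alters) + 1
print(dict(G.degree(egos)))            # degrees of the two egos
print(f"density over observable dyads: {G.number_of_edges()}/{observable}")
```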
Abstract:
Diffusion tensor magnetic resonance imaging, which measures directional information of water diffusion in the brain, has emerged as a powerful tool for human brain studies. In this paper, we introduce a new Monte Carlo-based fiber tracking approach to estimate brain connectivity. One of the main characteristics of this approach is that all parameters of the algorithm are automatically determined at each point using the entropy of the eigenvalues of the diffusion tensor. Experimental results show the good performance of the proposed approach.
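A plausible reading of the entropy criterion, sketched under assumed details: normalise the tensor's eigenvalues into a distribution and take their Shannon entropy, so that low entropy indicates strongly oriented diffusion and high entropy near-isotropic diffusion. How the entropy then maps to tracking parameters follows the paper's own rules and is not reproduced here.

```python
# Entropy of the diffusion tensor's eigenvalues, normalised to [0, 1].
# Low values: anisotropic (fiber-like); high values: isotropic (e.g. CSF).
import numpy as np

def eigenvalue_entropy(tensor: np.ndarray) -> float:
    ev = np.linalg.eigvalsh(tensor)        # 3 non-negative eigenvalues
    p = ev / ev.sum()                      # normalise to a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(3))

fiber = np.diag([1.7, 0.3, 0.2])   # strongly oriented tensor (toy values)
csf   = np.diag([1.0, 1.0, 1.0])   # isotropic tensor
print(eigenvalue_entropy(fiber), eigenvalue_entropy(csf))
```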
Abstract:
PURPOSE: Studies of diffuse large B-cell lymphoma (DLBCL) are typically evaluated by using a time-to-event approach with relapse, re-treatment, and death commonly used as the events. We evaluated the timing and type of events in newly diagnosed DLBCL and compared patient outcome with reference population data. PATIENTS AND METHODS: Patients with newly diagnosed DLBCL treated with immunochemotherapy were prospectively enrolled onto the University of Iowa/Mayo Clinic Specialized Program of Research Excellence Molecular Epidemiology Resource (MER) and the North Central Cancer Treatment Group NCCTG-N0489 clinical trial from 2002 to 2009. Patient outcomes were evaluated at diagnosis and in the subsets of patients achieving event-free status at 12 months (EFS12) and 24 months (EFS24) from diagnosis. Overall survival was compared with age- and sex-matched population data. Results were replicated in an external validation cohort from the Groupe d'Etude des Lymphomes de l'Adulte (GELA) Lymphome Non Hodgkinien 2003 (LNH2003) program and a registry based in Lyon, France. RESULTS: In all, 767 patients with newly diagnosed DLBCL who had a median age of 63 years were enrolled onto the MER and NCCTG studies. At a median follow-up of 60 months (range, 8 to 116 months), 299 patients had an event and 210 patients had died. Patients achieving EFS24 had an overall survival equivalent to that of the age- and sex-matched general population (standardized mortality ratio [SMR], 1.18; P = .25). This result was confirmed in 820 patients from the GELA study and registry in Lyon (SMR, 1.09; P = .71). Simulation studies showed that EFS24 has comparable power to continuous EFS when evaluating clinical trials in DLBCL. CONCLUSION: Patients with DLBCL who achieve EFS24 have a subsequent overall survival equivalent to that of the age- and sex-matched general population. EFS24 will be useful in patient counseling and should be considered as an end point for future studies of newly diagnosed DLBCL.
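For reference, the standardized mortality ratio (SMR) quoted above is the ratio of observed deaths in the cohort to the deaths expected under age- and sex-matched population rates; a toy computation with invented counts:

```python
# Toy SMR computation; the counts are invented, not the study's data.
observed = 59          # deaths seen in the cohort
expected = 50.0        # deaths expected from matched population rates
print(f"SMR = {observed / expected:.2f}")   # 1.18: 18% excess mortality
```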
Abstract:
In the forensic examination of DNA mixtures, the question of how to set the total number of contributors (N) presents a topic of ongoing interest. Part of the discussion gravitates around issues of bias, in particular when assessments of the number of contributors are not made prior to considering the genotypic configuration of potential donors. A further complication may stem from the observation that, in some cases, there may be numbers of contributors that are incompatible with the set of alleles seen in the profile of a mixed crime stain, given the genotype of a potential contributor. In such situations, procedures that take a single and fixed number of contributors as their output can lead to inferential impasses. Assessing the number of contributors within a probabilistic framework can help avoid such complications. Using elements of decision theory, this paper analyses two strategies for inference on the number of contributors. One procedure is deterministic and focuses on the minimum number of contributors required to 'explain' an observed set of alleles. The other procedure is probabilistic, using Bayes' theorem, and provides a probability distribution over a set of numbers of contributors, based on the set of observed alleles as well as their respective rates of occurrence. The discussion concentrates on mixed stains of varying quality (i.e., different numbers of loci for which genotyping information is available). A so-called qualitative interpretation is pursued, since quantitative information such as peak area and height data is not taken into account. The competing procedures are compared using a standard scoring rule that penalizes the degree of divergence between a given agreed value for N, that is the number of contributors, and the actual value taken by N. Using only modest assumptions and a discussion with reference to a casework example, this paper reports on analyses using simulation techniques and graphical models (i.e., Bayesian networks) to point out that setting the number of contributors to a mixed crime stain in probabilistic terms is, for the conditions assumed in this study, preferable to a decision policy that uses categorical assumptions about N.
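The two strategies can be contrasted in a few lines. The minimum-N rule below uses only the fact that each contributor carries at most two alleles per locus; the prior and likelihood values in the Bayesian part are invented for illustration and are not the paper's model.

```python
# (1) Deterministic minimum N; (2) toy Bayesian posterior over N.
from math import ceil

def minimum_contributors(allele_counts):
    """Smallest N able to 'explain' the profile: each contributor
    carries at most two alleles per locus."""
    return max(ceil(a / 2) for a in allele_counts)

print(minimum_contributors([3, 5, 4]))   # five alleles at one locus -> N >= 3

# Bayes: P(N | alleles) proportional to P(alleles | N) * P(N).
prior = {1: 0.3, 2: 0.4, 3: 0.2, 4: 0.1}          # invented prior
likelihood = {1: 0.0, 2: 0.0, 3: 0.05, 4: 0.02}   # invented likelihoods
post = {n: prior[n] * likelihood[n] for n in prior}
z = sum(post.values())
print({n: round(p / z, 3) for n, p in post.items()})  # mass only on N >= 3
```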
Abstract:
This paper aims to survey the techniques and methods described in the literature to analyse and characterise voltage sags, and the corresponding objectives of these works. The study has been performed from a data mining point of view.