898 results for Maximum Power Point Tracking algorithms
Abstract:
We present a variable time step, fully adaptive in space, hybrid method for the accurate simulation of incompressible two-phase flows in the presence of surface tension in two dimensions. The method is based on the hybrid level set/front-tracking approach proposed in [H. D. Ceniceros and A. M. Roma, J. Comput. Phys., 205, 391-400, 2005]. Geometric, interfacial quantities are computed from front-tracking via the immersed-boundary setting, while the signed distance (level set) function, which is evaluated fast and to machine precision, is used as a fluid indicator. The surface tension force is obtained by employing the mixed Eulerian/Lagrangian representation introduced in [S. Shin, S. I. Abdel-Khalik, V. Daru and D. Juric, J. Comput. Phys., 203, 493-516, 2005], whose success in greatly reducing parasitic currents has been demonstrated. The use of our accurate fluid indicator together with effective Lagrangian marker control enhances this parasitic current reduction by several orders of magnitude. To resolve sharp gradients and salient flow features accurately and efficiently, we employ dynamic adaptive mesh refinement. This spatial adaptation is used in concert with a dynamic control of the distribution of the Lagrangian nodes along the fluid interface and a variable time step, linearly implicit time integration scheme. We present numerical examples designed to test the capabilities and performance of the proposed approach, as well as three applications: the long-time evolution of a fluid interface undergoing Rayleigh-Taylor instability, an example of ascending bubble dynamics, and a drop impacting on a free interface, whose dynamics we compare with both existing numerical and experimental data.
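As a rough illustration of the fluid-indicator idea described above, the sketch below builds a signed distance function from front-tracking marker points and smooths it into an indicator for material properties. It is a brute-force toy (O(nodes × markers)), not the fast, machine-precision evaluation the paper uses; the function names and the NumPy/Matplotlib dependencies are our assumptions.

```python
import numpy as np
from matplotlib.path import Path

def fluid_indicator(grid_x, grid_y, markers, eps):
    """Hypothetical sketch of a signed-distance fluid indicator:
    unsigned distance from each grid node to the closest Lagrangian
    marker, signed by a point-in-polygon test against the closed front,
    then smoothed with a Heaviside profile of half-width eps."""
    pts = np.column_stack([grid_x.ravel(), grid_y.ravel()])
    # unsigned distance to the closest marker on the tracked front
    d = np.sqrt(((pts[:, None, :] - markers[None, :, :]) ** 2).sum(-1)).min(1)
    # negative inside the closed front, positive outside
    inside = Path(markers).contains_points(pts)
    phi = np.where(inside, -d, d).reshape(grid_x.shape)
    # smoothed Heaviside: a density/viscosity jump spread over width 2*eps
    h = np.clip((phi + eps) / (2 * eps), 0.0, 1.0)
    h = np.where(np.abs(phi) <= eps,
                 0.5 * (1 + phi / eps + np.sin(np.pi * phi / eps) / np.pi), h)
    return phi, h
```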
Abstract:
In this paper we introduce the Weibull power series (WPS) class of distributions, which is obtained by compounding Weibull and power series distributions, where the compounding procedure follows the same approach previously carried out by Adamidis and Loukas (1998). This new class of distributions has as a particular case the two-parameter exponential power series (EPS) class of distributions (Chahkandi and Ganjali, 2009), which contains several lifetime models such as the exponential geometric (Adamidis and Loukas, 1998), exponential Poisson (Kus, 2007) and exponential logarithmic (Tahmasbi and Rezaei, 2008) distributions. The hazard function of our class can be increasing, decreasing and upside-down bathtub shaped, among others, while the hazard function of an EPS distribution is only decreasing. We obtain several properties of the WPS distributions, such as moments, order statistics, estimation by maximum likelihood and inference for a large sample. Furthermore, the EM algorithm is also used to determine the maximum likelihood estimates of the parameters, and we discuss maximum entropy characterizations under suitable constraints. Special distributions are studied in some detail. Applications to two real data sets are given to show the flexibility and potentiality of the new class of distributions. (C) 2010 Elsevier B.V. All rights reserved.
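The compounding construction can be made concrete with a small simulation sketch: take the minimum of N Weibull lifetimes where N follows a (zero-truncated) power series law, here the geometric special case. The helper name and parameter values are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def rwps_geometric(n, shape, scale, p):
    """Sample the Weibull-geometric special case of the WPS class:
    X = min(W_1, ..., W_N), W_i ~ Weibull(shape, scale), with N a
    zero-truncated geometric, mirroring the Adamidis-Loukas compounding
    with Weibull in place of exponential lifetimes."""
    # geometric on {1, 2, ...}: P(N = k) = (1 - p) * p**(k - 1)
    N = rng.geometric(1 - p, size=n)
    return np.array([scale * rng.weibull(shape, size=k).min() for k in N])

sample = rwps_geometric(10_000, shape=0.8, scale=2.0, p=0.4)
print(sample.mean(), np.median(sample))
```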
Abstract:
I consider the case for genuinely anonymous web searching. Big data seems to have it in for privacy. The story is well known, particularly since the dawn of the web. Vastly more personal information, monumental and quotidian, is gathered than in the pre-digital days. Once gathered it can be aggregated and analyzed to produce rich portraits, which in turn permit unnerving prediction of our future behavior. The new information can then be shared widely, limiting prospects and threatening autonomy. How should we respond? Following Nissenbaum (2011) and Brunton and Nissenbaum (2011 and 2013), I will argue that the proposed solutions—consent, anonymity as conventionally practiced, corporate best practices, and law—fail to protect us against routine surveillance of our online behavior. Brunton and Nissenbaum rightly maintain that, given the power imbalance between data holders and data subjects, obfuscation of one’s online activities is justified. Obfuscation works by generating “misleading, false, or ambiguous data with the intention of confusing an adversary or simply adding to the time or cost of separating good data from bad,” thus decreasing the value of the data collected (Brunton and Nissenbaum, 2011). The phenomenon is as old as the hills. Natural selection evidently blundered upon the tactic long ago. Take a savory butterfly whose markings mimic those of a toxic cousin. From the point of view of a would-be predator the data conveyed by the pattern is ambiguous. Is the bug lunch or potential last meal? In the light of the steep costs of a mistake, the savvy predator goes hungry. Online obfuscation works similarly, attempting for instance to disguise the surfer’s identity (Tor) or the nature of her queries (Howe and Nissenbaum 2009). Yet online obfuscation comes with significant social costs. First, it implies free riding. If I’ve installed an effective obfuscating program, I’m enjoying the benefits of an apparently free internet without paying the costs of surveillance, which are shifted entirely onto non-obfuscators. Second, it permits sketchy actors, from child pornographers to fraudsters, to operate with near impunity. Third, online merchants could plausibly claim that, when we shop online, surveillance is the price we pay for convenience. If we don’t like it, we should take our business to the local brick-and-mortar and pay with cash. Brunton and Nissenbaum have not fully addressed the last two costs. Nevertheless, I think the strict defender of online anonymity can meet these objections. Regarding the third, the future doesn’t bode well for offline shopping. Consider music and books. Intrepid shoppers can still find most of what they want in a book or record store. Soon, though, this will probably not be the case. And then there are those who, for perfectly good reasons, are sensitive about doing some of their shopping in person, perhaps because of their weight or sexual tastes. I argue that consumers should not have to pay the price of surveillance every time they want to buy that catchy new hit, that New York Times bestseller, or a sex toy.
Abstract:
This thesis concentrates on a very interesting problem, the Vehicle Routing Problem (VRP). In this problem, customers or cities have to be visited and packages have to be transported to each of them, starting from a base point on the map. The goal is to solve the transportation problem: to deliver the packages on time, with enough packages for each customer, using the available resources, and, of course, as effectively as possible. Although this problem seems easy to solve with a small number of cities or customers, it is not. The algorithm has to cope with several constraints, for example opening hours, package delivery times and truck capacities, which makes this a so-called Multi-Constraint Optimization Problem (MCOP). What is more, the problem is intractable with the amount of computational power available to most of us: as the number of customers grows, the amount of calculation grows exponentially, because all constraints have to be satisfied for each customer, and it should not be forgotten that the goal is to find a solution that is good enough before the time available for the calculation runs out. The problem is introduced in the first chapter: starting from its basis, the Traveling Salesman Problem, and using some theoretical and mathematical background, it is shown why this problem is so hard to optimize, and why, even though it is so hard and no best algorithm is known for a huge number of customers, it is worth dealing with. Just think of a huge transportation company with tens of thousands of trucks and millions of customers: how much money could be saved if the optimal path for all packages were known. Although no best algorithm is known for this kind of optimization problem, an acceptable solution is sought in the second and third chapters, where two algorithms are described: the Genetic Algorithm and Simulated Annealing. Both are inspired by processes found in nature and materials science. These algorithms will hardly ever find the best solution to the problem, but in many cases they can give a very good solution within an acceptable calculation time. In these chapters (2nd and 3rd) the Genetic Algorithm and Simulated Annealing are described in detail, from their basis in the "real world", through their terminology, to their basic implementation. The work stresses the limits of these algorithms, their advantages and disadvantages, and also compares them to each other. Finally, after this theory is presented, a simulation is executed in an artificial VRP environment with both Simulated Annealing and the Genetic Algorithm; a sketch of the annealing core follows below. They both solve the same problem in the same environment and are compared to each other. The environment and the implementation are also described, as well as the test results obtained. Finally, possible improvements of these algorithms are discussed, and the work tries to answer the "big" question, "Which algorithm is better?", if that question even exists.
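A minimal sketch of the Simulated Annealing idea applied to the TSP core of the VRP (no capacities or time windows): segment-reversal neighbor moves, with worse tours accepted with probability exp(-delta/T) under a geometric cooling schedule. All parameter values are illustrative, not taken from the thesis.

```python
import math, random

def tour_length(tour, dist):
    # total cycle length, including the return edge tour[-1] -> tour[0]
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def simulated_annealing(dist, T0=10.0, cooling=0.999, steps=50_000, seed=1):
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cur_len = tour_length(tour, dist)
    best, best_len = tour[:], cur_len
    T = T0
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
        delta = tour_length(cand, dist) - cur_len
        # accept improvements always, worse tours with Boltzmann probability
        if delta < 0 or rng.random() < math.exp(-delta / T):
            tour, cur_len = cand, cur_len + delta
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        T *= cooling  # geometric cooling schedule
    return best, best_len

# toy symmetric instance
pts = [(0, 0), (1, 0), (1, 1), (0, 1), (2, 2)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
print(simulated_annealing(dist))
```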
Abstract:
The aim of this study was to investigate how electrically heated houses can be converted to using wood pellet and solar heating. There are a large number of wood pellet stoves on the market. Many stoves have a water jacket, which gives an opportunity to distribute the heat to domestic hot water and a radiator heating system. Three typical Swedish houses with electric resistance heating have been studied. Fourteen different system concepts using wood pellet stoves and solar heating systems have been evaluated. The systems and the houses have been simulated in detail using TRNSYS. The houses have been divided into up to 10 different zones, and heat transfer by air circulation through doorways and open doors has been simulated. The pellet stoves were simulated using a recently developed TRNSYS component, which models the start and stop phases, the emissions and the dynamic behaviour of the stoves. The model also calculates the CO emissions. Simulations were made with one stove without a water jacket and two stoves with different fractions of the generated heat distributed in the water circuit. Simulations show that the electricity savings using a pellet stove are greatly affected by the house plan, the system choice, whether the internal doors are open or closed, and the desired level of comfort. Installing a stove with a water jacket connected to a radiator system and a hot water storage has the advantage that heat can be transferred to domestic hot water and distributed to other rooms. Such systems lead to greater electricity savings, especially in houses having a traditional layout. It was found that not all rooms needed radiators and that in most cases it was more effective to use a stove with a higher fraction of the heat distributed by the water circuit. The economic investigation shows that installing a wood pellet stove without a water jacket gives the lowest total energy and capital costs in the house with an open plan (for today's energy prices and the simulated comfort criteria). In the houses with a traditional layout, a pellet stove gives slightly higher costs than the reference house with only electrical resistance heating, because less heating can be replaced. The concepts including stoves with a water jacket all give higher costs than the reference system, but the concept closest to being economical is a system with a buffer store, a stove with a high fraction of the heat distributed by the water circuit, a new water radiator heating system and a solar collector. Losses from stoves can be divided into: flue gas losses, including the leakage air flow when the stove is not in operation; losses during start and stop phases; and losses due to a high air factor. Increased stove efficiency is important both from a private economic point of view and from the perspective that there may be a shortage of biofuel in the near future, also in Sweden. From this point of view it is also important to utilize as much solar heat as possible. The utilization of solar heat is low in the simulated systems, owing to the lack of space for a large buffer store. The simulations have shown that the annual efficiency is much lower than the nominal efficiency at full power. They have also shown that changing the control principle for the stove can improve efficiency and reduce the CO emissions. Today's most common control principle for stoves is on/off control, which results in many starts and stops and thereby high CO emissions. A more advanced control, varying the heating rate from maximum to minimum to keep a constant room temperature, reduces the number of starts and stops and thereby the emissions. The efficiency can also be higher with such a control, and the room temperature will be kept more constant, providing higher comfort.
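The two control principles compared above can be sketched in a few lines: on/off control switches the stove fully on or off around the set point (each transition being a start or stop with its CO spike), while a modulating control varies the heating rate continuously between maximum and the stove's minimum rate. The set point, band width and minimum rate below are illustrative assumptions, not values from the study.

```python
def on_off_control(T_room, burning, T_set=21.0, hysteresis=1.0):
    """On/off control: full power below the band, off above it.
    Every transition is a start or stop; each start emits a CO spike."""
    if T_room < T_set - hysteresis:
        return 1.0                        # full heating rate (a start if off)
    if T_room > T_set + hysteresis:
        return 0.0                        # off (a stop if burning)
    return 1.0 if burning else 0.0        # keep current state inside the band

def modulating_control(T_room, T_set=21.0, band=2.0, P_min=0.3):
    """Modulating control sketch: vary the heating rate between maximum
    and the stove's minimum so the room temperature stays near the set
    point and starts/stops are largely avoided."""
    fraction = (T_set + band / 2 - T_room) / band  # 1 at band bottom, 0 at top
    return min(1.0, max(P_min, fraction))
```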
Abstract:
The aim of this paper is to point out the benefits as well as the disadvantages associated with the use of locally available, not necessarily standardized, components in stand-alone electrical power systems at rural locations. Advantages and challenges that arise when the direct involvement in the design, construction and maintenance of the power system is reserved to people based in the area of implementation are discussed. The presented research is centered on one particular PV-diesel hybrid system in Tanzania: a case study in which technical and social aspects related to this particular power system are studied.
Abstract:
Photovoltaic/thermal (PVT) hybrid collectors are an emerging technology that combines PV and solar thermal collectors by producing heat and electricity simultaneously. In this paper, the electrical performance of a low concentrating PVT collector was evaluated in two testing parts: power comparison and performance ratio testing. The performance ratio testing requires identifying and measuring the factors affecting the performance ratio of a low concentrating PVT collector, such as the PV cell configuration, collector acceptance angle, flow rate, tracking of the sun, temperature dependence, and the diffuse-to-global irradiance ratio. The Solarus low concentrating PVT collector V12 was tested at Dalarna University in Sweden using the electrical equipment in the solar laboratory. The PV testing showed differences between the two receivers: Back2 produced 1.8 times the energy output of Back1 throughout the day, while Front1 and Front2 showed almost the same output performance. The performance tests showed that the cell configuration of Receiver2, with cells grouped (6-32-32-6), has a better performance ratio when it comes to minimizing the shading effect, leading to more output power throughout the day because of lower mismatch losses. The different factors measured are presented in chapter 5 of this thesis. With the current design, a peak power at STC of 107 W per receiver was obtained. The solar cells have an electrical efficiency of approximately 19%, while the maximum measured electrical efficiency of the collector was approximately 18% per active cell area, with a temperature coefficient of -0.53%/°C. Finally, a recommendation was made to help Solarus AB understand how much the electrical performance is affected under variable ambient conditions and to use the results for analysis and for introducing new modifications if needed.
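The reported peak power and temperature coefficient can be combined in the standard linear temperature-correction model to estimate output under off-STC conditions. This simple model is our illustration of how such numbers are typically used, not Solarus' own evaluation procedure; the function name and example operating point are assumptions.

```python
def cell_power(G, T_cell, P_stc=107.0, gamma=-0.0053, G_stc=1000.0):
    """Linear temperature-coefficient model:
        P = P_STC * (G / G_STC) * (1 + gamma * (T_cell - 25))
    using the 107 W peak power at STC and the -0.53 %/degC coefficient
    reported for the tested receiver."""
    return P_stc * (G / G_stc) * (1.0 + gamma * (T_cell - 25.0))

# warm cells at reduced irradiance: noticeably below 0.8 * 107 W
print(cell_power(G=800.0, T_cell=55.0))   # ~72 W
```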
Abstract:
A gay activist from Uganda had applied for an Oak Fellowship at Colby; his murder should serve as a warning of the sometimes dangerous power of Americans abroad. Ellen Morris ’11 on “knowing” slain gay-rights activist David Kato.
Abstract:
The evolution of integrated circuit technologies demands the development of new CAD tools. The traditional development of digital circuits at the physical level is based on cell libraries, which offer a certain predictability of the electrical behavior of the design due to the previous characterization of the cells. Moreover, different versions of each cell are required so that delay and power consumption characteristics are taken into account, increasing the number of cells in a library. Automatic full-custom layout generation is an increasingly important alternative to cell-based approaches. This strategy implements transistors and connections according to patterns defined by algorithms, so it is possible to implement any logic function, avoiding the limitations of a cell library. Analysis and estimation tools must provide predictability in automatic full-custom layouts; they must be able to work with layout estimates and to generate information about delay, power consumption and area occupation. This work includes research into new methods of physical synthesis and the implementation of an automatic layout generator in which the cells are generated at the moment of layout synthesis. The research investigates different strategies for the placement of elements (transistors, contacts and connections) in a layout and their effects on area occupation and circuit delay. The presented layout strategy applies delay optimization through integration with a gate sizing technique, in such a way that the folding method allows individual discrete sizing of transistors. The main characteristics of the proposed strategy are: power supply lines between rows, over-the-layout routing (channel routing is not used), circuit routing performed before layout generation, and layout generation targeting delay reduction by application of the sizing technique. The possibility of implementing any logic function, without the restrictions imposed by a cell library, allows circuit synthesis with optimization of the number of transistors. This reduction in the number of transistors decreases delay and power consumption, mainly the static power consumption in submicrometer circuits. Comparisons between the proposed strategy and other well-known methods are presented in order to validate the proposed method.
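The folding step mentioned above, which enables discrete per-transistor sizing inside automatically generated rows, amounts to splitting a transistor wider than the row allows into parallel fingers of equal width. The sketch below is a hypothetical illustration of that calculation; names and values are our assumptions.

```python
import math

def fold_transistor(width_um, max_finger_um):
    """Folding sketch: split a transistor wider than the allowed row
    height into parallel fingers of equal width, so discrete sizing can
    be applied per transistor during layout generation."""
    fingers = max(1, math.ceil(width_um / max_finger_um))
    return fingers, width_um / fingers   # (number of legs, width per leg)

print(fold_transistor(12.0, 5.0))   # -> (3, 4.0): three 4 um legs in parallel
```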
Abstract:
Indexing is a passive investment strategy in which the investor weights his portfolio to match the performance of a broad-based index. Since several studies showed that indexed portfolios have consistently outperformed active management strategies over the last decades, an increasing number of investors has become interested in indexing portfolios lately. Brazilian financial institutions do not offer indexed portfolios to their clients at this point in time. In this work we propose the use of indexed portfolios to track the performance of two of the most important Brazilian stock indexes: the IBOVESPA and the FGV100. We test the tracking performance of our model by a historical simulation. We applied several statistical tests to the data to verify how many stocks should be used to control the portfolio tracking error within user-specified bounds.
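The quantity being controlled here, tracking error, is conventionally computed as the annualized standard deviation of the active return of the portfolio versus the index. A minimal sketch follows; the daily frequency, the 252-day convention and the synthetic data are our assumptions, not the thesis' methodology.

```python
import numpy as np

def tracking_error(portfolio_returns, index_returns, periods_per_year=252):
    """Annualized tracking error: std. deviation of the active return of
    the indexed portfolio versus the target index (e.g., IBOVESPA)."""
    active = np.asarray(portfolio_returns) - np.asarray(index_returns)
    return active.std(ddof=1) * np.sqrt(periods_per_year)

rng = np.random.default_rng(42)
idx = rng.normal(0.0005, 0.015, 500)        # synthetic daily index returns
port = idx + rng.normal(0.0, 0.002, 500)    # sampled portfolio tracking it
print(tracking_error(port, idx))            # compare against a user-specified bound
```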
Abstract:
This thesis reports on research into the integration of eye tracking technology into virtual reality environments, with the goal of using it in the rehabilitation of patients who have suffered a stroke. For the last few years, eye tracking has been a focus of medical research, used as an assistive tool to help people with disabilities interact with new technologies and as an assessment tool to track eye gaze during computer interactions. However, tracking more complex gaze behaviors and relating them to motor deficits in people with disabilities is an area that has not been fully explored, and it therefore became the focal point of this research. During the research, two exploratory studies were performed in which eye tracking technology was integrated into a newly created virtual reality task to assess the impact of stroke. Using an eye tracking device and a custom virtual task, the system developed is able to monitor changes in eye gaze patterns over time in patients with stroke, as well as allowing their eye gaze to function as an input for the task. Based on neuroscientific hypotheses of upper limb motor control, the studies aimed to verify the differences in gaze patterns during the observation and execution of the virtual goal-oriented task in stroke patients (N=10), and to assess normal gaze behavior in healthy participants (N=20). The results were consistent with and supported the formulated hypotheses, showing that eye gaze could be used as a valid assessment tool in these patients. However, the findings of this first exploratory approach are too limited to fully explain the effect of stroke on eye gaze behavior. Therefore, a novel model-driven paradigm is proposed to further understand the relation between the neuronal mechanisms underlying goal-oriented actions and eye gaze behavior.
Abstract:
Empathy is a basic facilitating element of the therapeutic helping relationship and of the humanization process in health care. The objectives of this study were to identify the empathy level of health professionals working in the obstetrical sector of a university hospital recognized for its humanistic care, and the perceptions of the women under their care regarding the empathic behavior shown by these professionals during hospitalization. We conducted a quantitative/qualitative study with 47 health professionals working in the obstetrical sector (13 obstetricians, 12 nurses, 22 nurse technicians) and an intentional sample of 101 women who received care from these professionals during the study period. We collected data by means of the Jefferson Empathy Scale for Health Professionals (JEPS-HR) and the Patient's Perception of Health Professional Empathy (PPHPE), plus two additional open questions designed to obtain subjective opinions about the empathic behavior during care. We used thematic analysis for the data obtained through the open questions and descriptive and inferential statistics for the quantitative data. We identified five thematic categories that represent the aspects valued by the professionals in their relationship with the women under their care: emotional involvement, communication, warm environment, integral vision and technical/scientific knowledge. The mean score on the JEPS-HR for the health professionals was 120.40, the maximum possible being 140. Cronbach's alpha for the JEPS-HR was 0.83, indicating an acceptable level of reliability for this population. We therefore consider that these professionals presented an acceptable empathy level when compared to other populations observed with the JEPS-HR. The results also indicated that women had statistically significantly (p ≤ 0.05) higher scores than men and that professionals with longer working hours tended to have lower scores on the empathy scale (r = -0.288; p ≤ 0.05). The analysis of the subjective responses of the women indicated that they were satisfied with the humanistic care provided by the professionals, but they also pointed out the existence of some power relationships. There were no significant differences in the empathy level of the medical or nursing team perceived by the women, who registered means of 41.90 and 41.20, respectively, on the PPHPE. In view of these results, and considering the relevance of empathy for care based on humanistic values, we reiterate the importance of further in-service training for the health team of the hospital in focus on the topics of empathy and the global aspects of humanized care, for the implementation of its mission.
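The reliability figure quoted above (Cronbach's alpha = 0.83) follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with synthetic data, purely for illustration (it does not reproduce the study's value):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(47, 1))                       # one underlying factor
scores = latent + rng.normal(scale=0.8, size=(47, 20))  # 20 correlated items
print(round(cronbach_alpha(scores), 2))
```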
Abstract:
The usual programs for load flow calculation were in general developed aiming at the simulation of electric energy transmission, subtransmission and distribution systems. However, the mathematical methods and algorithms used in their formulations were mostly based on the characteristics of transmission systems alone, which were the main focus of concern of engineers and researchers. Yet the physical characteristics of these systems are quite different from those of distribution systems. In transmission systems, the voltage levels are high and the lines are generally very long, so the capacitive and inductive effects that appear in the system have a considerable influence on the values of the quantities of interest and must be taken into consideration. Also, in transmission systems the loads have a macro nature, for example cities, neighborhoods or big industries. These loads are generally practically balanced, which reduces the need for a three-phase methodology in the load flow calculation. Distribution systems, on the other hand, present different characteristics: the voltage levels are low in comparison with transmission levels, which almost annuls the capacitive effects of the lines. The loads are, in this case, transformers, to whose secondaries small consumers are connected, often single-phase ones, so that the probability of finding an unbalanced circuit is high. In this context, the use of three-phase methodologies assumes an important dimension. Besides, equipment such as voltage regulators, which simultaneously use the concepts of phase and line voltage in their functioning, needs a three-phase methodology in order to allow the simulation of its real behavior. For these reasons, a method for three-phase load flow calculation was initially developed within the scope of this work, in order to simulate the steady-state behaviour of distribution systems. To achieve this goal, the Power Summation Algorithm was used as a basis for developing the three-phase method. This algorithm has already been widely tested and approved by researchers and engineers in the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between the phases, while the earth effect is considered through the Carson reduction. It is important to point out that, although the loads are normally connected to the transformers' secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used that allows the simulation of various types of configurations, according to their real functioning. Finally, the possibility of representing switches with current measurement at various points of the feeder was considered. The loads are adjusted during the iterative process in order to match the current in each switch, converging to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived, taking the described load flow as a basis, with the objective of supporting further optimization processes. These parameters are found by calculating the partial derivatives of one variable with respect to another, in general voltages, losses and reactive powers. After describing the calculation of the sensitivity parameters, the Gradient Method is presented, using these parameters to optimize an objective function defined for each type of study. The first study refers to the reduction of technical losses in a medium voltage feeder through the installation of capacitor banks; the second refers to the correction of the voltage profile through the installation of capacitor banks or voltage regulators. For loss reduction, the objective function is the sum of the losses in all parts of the system. For the correction of the voltage profile, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of the application of the described methods to some feeders are presented, in order to give insight into their performance and accuracy.
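As a rough sketch of the gradient-method step for the voltage-profile objective F = Σ_i (V_i - V_rated)², the code below uses a constant sensitivity matrix dV/dQ to update capacitor injections. A real implementation would refresh the sensitivities by re-running the three-phase load flow at each iteration; here a linearized model stands in for it, and all names and numbers are illustrative assumptions.

```python
import numpy as np

def gradient_voltage_correction(v, sens, q, q_max, rate=0.1, iters=200):
    """Gradient descent on F = sum_i (V_i - 1.0)^2 (per-unit), with
    sens[i, j] ~= dV_i/dQ_j supplied by the load-flow sensitivity stage
    and capacitor injections Q bounded by available bank sizes."""
    v = np.asarray(v, dtype=float)
    q = np.asarray(q, dtype=float)
    for _ in range(iters):
        v_est = v + sens @ q                    # linearized voltage response
        grad = 2.0 * sens.T @ (v_est - 1.0)     # dF/dQ, V_rated = 1.0 pu
        q = np.clip(q - rate * grad, 0.0, q_max)
    return q, v + sens @ q

v0 = np.array([0.93, 0.95, 0.97])               # initial node voltages (pu)
S = np.array([[0.04, 0.01], [0.02, 0.03], [0.01, 0.02]])  # dV/dQ (pu/Mvar)
q_opt, v_new = gradient_voltage_correction(v0, S, q=np.zeros(2), q_max=2.0)
print(q_opt, v_new)
```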
Abstract:
Markovian algorithms for estimating the global maximum or minimum of real-valued functions defined on some domain Ω ⊂ R^d are presented. Conditions on the search schemes that preserve the asymptotic distribution are derived. Global and local search schemes satisfying these conditions are analysed and shown to yield sharper confidence intervals when compared to the i.i.d. case.
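To make the local/global search-scheme idea concrete, here is a minimal Markovian search sketch on a box domain in R^d: a random-walk proposal (local step) mixed with occasional uniform restarts (global step), keeping the running record as the estimate of the maximum. This only illustrates the general idea; it is not the specific schemes or confidence-interval construction analysed in the paper, and all names and parameters are assumptions.

```python
import numpy as np

def markov_search(f, lo, hi, n_steps=20_000, sigma=0.1, seed=0):
    """Markov-chain search for max f over the box [lo, hi]^d: the next
    state depends only on the current one (local random walk) or is a
    uniform restart (global move); improvements are always accepted."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi)
    best_x, best_f = x, f(x)
    for _ in range(n_steps):
        if rng.random() < 0.05:                          # global move: restart
            y = rng.uniform(lo, hi)
        else:                                            # local move: random walk
            y = np.clip(x + rng.normal(0, sigma, x.shape), lo, hi)
        fy = f(y)
        if fy >= f(x):                                   # move up when possible
            x = y
        if fy > best_f:                                  # record-value estimate
            best_x, best_f = y, fy
    return best_x, best_f

# example: maximize a bumpy function on [-2, 2]^2
g = lambda z: np.cos(3 * z).prod() * np.exp(-(z ** 2).sum())
print(markov_search(g, [-2, -2], [2, 2]))
```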