898 results for "Maximum Power Point Tracking algorithms"


Relevance: 30.00%

Abstract:

Global illumination algorithms are at the center of realistic image synthesis and account for non-trivial light transport and occlusion within scenes, such as indirect illumination, ambient occlusion, and environment lighting. Their computationally most difficult part is determining light source visibility at each visible scene point. Height fields, on the other hand, constitute an important special case of geometry and are mainly used to describe certain types of objects such as terrains and to map detailed geometry onto object surfaces. The geometry of an entire scene can also be approximated by treating the distance values of its camera projection as a screen-space height field. In order to shadow height fields from environment lights, a horizon map is usually used to occlude incident light. We reduce the per-receiver time complexity of generating the horizon map on N × N height fields from the O(N) of previous work to O(1) by using an algorithm that incrementally traverses the height field and reuses the information already gathered along the path of traversal. We also propose an accurate method to integrate the incident light within the limits given by the horizon map. Indirect illumination in height fields requires information about which other points are visible to each height field point. We present an algorithm to determine this intervisibility in a time complexity that matches the space complexity of the produced visibility information, in contrast to previous methods, which scale with the height field size. As a result the amount of computation is reduced by two orders of magnitude in common use cases. Screen-space ambient obscurance methods approximate ambient obscurance from the depth buffer geometry and have been widely adopted by contemporary real-time applications. They work by sampling the screen-space geometry around each receiver point but have been previously limited to near-field effects because sampling a large radius quickly exceeds the render time budget. We present an algorithm that reduces the quadratic per-pixel complexity of previous methods to a linear complexity by line sweeping over the depth buffer and maintaining an internal representation of the processed geometry from which occluders can be efficiently queried. Another algorithm is presented to determine ambient obscurance from the entire depth buffer at each screen pixel. The algorithm scans the depth buffer in a quick pre-pass and locates important features in it, which are then used to evaluate the ambient obscurance integral accurately. We also propose an evaluation of the integral such that results within a few percent of the ray traced screen-space reference are obtained at real-time render times.
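
The incremental horizon computation lends itself to a compact sketch. Below is a minimal Python illustration of the sweep idea under stated assumptions: samples already passed are kept as an upper convex hull on a stack (each sample is pushed and popped at most once), and each receiver's horizon is the tangent to that hull. The tangent search here is a linear scan for clarity; the dissertation's contribution is reusing information along the traversal so that this step becomes amortized O(1). This is a sketch of the general technique, not the authors' exact algorithm.

```python
import math

def sweep_horizon(heights, spacing=1.0):
    """For each sample of a 1D height-field row, return the elevation angle
    of the horizon formed by all samples behind it (one sweep direction)."""
    n = len(heights)
    x = [i * spacing for i in range(n)]
    hull = []                          # indices forming the upper convex hull
    horizon = [-math.pi / 2] * n       # -pi/2 = nothing occludes the receiver

    def cross(o, a, b):                # > 0 if o -> a -> b turns left
        return ((x[a] - x[o]) * (heights[b] - heights[o])
                - (heights[a] - heights[o]) * (x[b] - x[o]))

    for j in range(n):
        if hull:
            # tangent from sample j to the hull = hull vertex with the
            # steepest elevation angle (linear scan; the full algorithm
            # reuses the previous tangent to make this amortized O(1))
            best = max(hull, key=lambda i: math.atan2(heights[i] - heights[j],
                                                      x[j] - x[i]))
            horizon[j] = math.atan2(heights[best] - heights[j], x[j] - x[best])
        # restore the upper-hull invariant before adding sample j
        while len(hull) >= 2 and cross(hull[-2], hull[-1], j) >= 0:
            hull.pop()
        hull.append(j)
    return horizon

print([round(a, 3) for a in sweep_horizon([0.0, 3.0, 1.0, 0.5, 0.2])])
```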

Relevance: 30.00%

Abstract:

As the share of renewable electricity generation grows, so does the need to balance the fluctuations between electricity generation and consumption by storing electricity. Power-to-Gas (PtG), producing natural gas from electrical energy, offers one possibility for storing electricity. Electricity is used for the electrolysis of water, and the resulting hydrogen is used in methanation together with carbon dioxide to form substitute natural gas. The substitute natural gas produced from electricity in this way is called e-SNG. This work examines the investment, operation, and maintenance costs of a PtG plant. A calculation model is created to produce profitability calculations for four use cases of a PtG plant. Sensitivity analyses are also calculated for the use cases. Based on the profitability calculations, the business potential of a PtG plant in Finland is assessed. According to the profitability calculations of this work, the business potential of the PtG plant base cases is poor. The sensitivity analyses showed that the investment costs, the operating hours of the plant, and additional revenue from oxygen and heat are the most critical success factors for profitability.
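
A profitability calculation of this kind reduces to a net-present-value computation over the plant lifetime plus one-at-a-time sensitivity runs. The sketch below shows the shape of such a model; every number (CAPEX, prices, efficiency) and the `ptg_annual_cashflow` helper are invented for illustration and are not from the thesis.

```python
def npv(capex, annual_cashflow, years, rate):
    """Net present value of a plant with a constant annual net cash flow."""
    return -capex + sum(annual_cashflow / (1 + rate) ** t
                        for t in range(1, years + 1))

def ptg_annual_cashflow(hours, p_el_mw, eff, gas_price, el_price, om_cost,
                        o2_heat_revenue=0.0):
    """Net yearly cash flow: e-SNG sales plus optional oxygen/heat by-product
    revenue, minus electricity and O&M costs (all values illustrative)."""
    gas_mwh = hours * p_el_mw * eff                 # e-SNG output per year
    revenue = gas_mwh * gas_price + o2_heat_revenue
    cost = hours * p_el_mw * el_price + om_cost
    return revenue - cost

base = npv(capex=15e6,
           annual_cashflow=ptg_annual_cashflow(hours=6000, p_el_mw=5, eff=0.55,
                                               gas_price=80, el_price=40,
                                               om_cost=3e5),
           years=20, rate=0.07)
print(f"base-case NPV: {base / 1e6:.1f} MEUR")

# one-at-a-time sensitivity: vary full-load hours, add by-product revenue
for hours in (3000, 6000, 8000):
    cf = ptg_annual_cashflow(hours, 5, 0.55, 80, 40, 3e5, o2_heat_revenue=2e5)
    print(hours, "h/a ->", round(npv(15e6, cf, 20, 0.07) / 1e6, 1), "MEUR")
```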

Relevance: 30.00%

Abstract:

In the doctoral dissertation, low-voltage direct current (LVDC) distribution system stability, supply security, and power quality are evaluated by computational modelling and by measurements on an LVDC research platform. Computational models for the LVDC network analysis are developed. Time-domain simulation models are implemented in the PSCAD/EMTDC simulation environment. The PSCAD/EMTDC models of the LVDC network are applied to transient behaviour and power quality studies. The LVDC network power loss model is developed in a MATLAB environment and is capable of fast estimation of the network and component power losses. The model integrates analytical equations that describe the power loss mechanisms of the network components with power flow calculations. For the LVDC network research platform, a monitoring and control software solution is developed. The solution is used to deliver measurement data for verification of the developed models and for analysis of the modelling results. In the work, the power loss mechanisms of the LVDC network components and their main dependencies are described. The energy loss distribution of the LVDC network components is presented. Power quality measurements and current spectra are provided, and harmonic pollution on the DC network is analysed. The transient behaviour of the network is verified through time-domain simulations. DC capacitor guidelines for an LVDC power distribution network are introduced. The power loss analysis results show that one of the main optimisation targets for an LVDC power distribution network should be the reduction of no-load losses and the improvement of converter efficiency at partial loads. Low-frequency spectra of the network voltages and currents are shown, and harmonic propagation is analysed. Power quality at the point of common coupling (PCC) of the LVDC network is discussed. The power quality standard requirements are shown to be met by the LVDC network. The network behaviour during transients is analysed by time-domain simulations. The network is shown to be transient stable during large-scale disturbances. Measurement results from the LVDC research platform confirming this are presented in the work.
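
The described loss model combines analytical per-component loss equations with a power flow. A minimal sketch of that structure is given below; the coefficients (p0, k1, k2), voltage level, and loads are made up for illustration and are not values from the dissertation.

```python
import numpy as np

def converter_losses(p_out, p0, k1, k2):
    """Analytical converter loss model: constant no-load losses plus terms
    linear and quadratic in the transferred power. Coefficients would be
    fitted per converter; the values used below are invented."""
    return p0 + k1 * p_out + k2 * p_out ** 2

def cable_losses(p, voltage, r_ohm):
    """Joule losses of a DC cable section carrying power p at a given voltage."""
    i = p / voltage
    return r_ohm * i ** 2

loads = np.array([2.0e3, 5.0e3, 10.0e3])          # W, three customer loads
conv = converter_losses(loads, p0=60.0, k1=0.01, k2=2e-6)
cab = cable_losses(loads.sum() + conv.sum(), voltage=750.0, r_ohm=0.2)
total = conv.sum() + cab
print(f"converter losses {conv.sum():.0f} W, cable losses {cab:.0f} W, "
      f"efficiency {loads.sum() / (loads.sum() + total):.3f}")
```

Note how the constant p0 term dominates at light load, which is why the dissertation identifies no-load losses and partial-load converter efficiency as the main optimisation targets.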

Relevance: 30.00%

Abstract:

This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize the on-chip resources. As the dark silicon era approaches, where power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most of the works on resource management treat only the physical components (i.e. computation, communication, and memory blocks) as resources and manipulate the component-to-application mapping to optimize various parameters (e.g. energy efficiency). To further enhance the optimization potential, in addition to the physical resources we propose to manipulate abstract resources (i.e. the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture). The proposed framework (i.e. VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) Private Operating Environment (POE), (ii) Private Reliability Environment (PRE), and (iii) Private Configuration Environment (PCE) that collectively ensure that each application meets its deadlines using minimal platform resources. In this work several novel architectural enhancements, algorithms, and policies are presented to realize the virtual runtime application partitions efficiently. Considering the future design trends, we have chosen Coarse Grained Reconfigurable Architectures (CGRAs) and Networks-on-Chip (NoCs) to test the feasibility of our approach. Specifically, we have chosen Dynamically Reconfigurable Resource Array (DRRA) and McNoC as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments. Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.
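
As a rough illustration of how one abstract resource, the voltage/frequency operating point, can be allocated per application, the toy function below picks the lowest-power V/F point that still meets a deadline. This is a hypothetical sketch of POE-like behaviour, not the VRAP implementation; the table values are invented.

```python
def pick_operating_point(workload_cycles, deadline_s, vf_points):
    """Choose the lowest-power voltage/frequency point that still meets the
    application's deadline -- a toy model of what a Private Operating
    Environment (POE) might do."""
    feasible = [(v, f, p) for (v, f, p) in vf_points
                if workload_cycles / f <= deadline_s]
    if not feasible:
        raise ValueError("no V/F point meets the deadline")
    return min(feasible, key=lambda vfp: vfp[2])   # minimise power

# (voltage V, frequency Hz, power W) -- illustrative numbers only
table = [(0.8, 200e6, 0.10), (0.9, 400e6, 0.25), (1.1, 800e6, 0.70)]
print(pick_operating_point(workload_cycles=3.0e8, deadline_s=1.0,
                           vf_points=table))   # -> (0.9, 400e6, 0.25)
```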

Relevance: 30.00%

Abstract:

In this thesis the effect of focal point parameters in fiber laser welding of structural steel is studied. The goal is to establish relations between laser power, focal point diameter, and focal point position and the resulting quality, weld-bead geometry, and hardness of the welds. In the laboratory experiments, AB AH36 shipbuilding steel was welded in an I-butt joint configuration using an IPG YLS-10000 continuous-wave fiber laser. The quality of the welds produced was evaluated based on standard SFS-EN ISO 13919-1. The weld-bead geometry was determined from the weld cross-sections, and the Vickers hardness test was used to measure hardness values from the middle of the cross-sections. It was shown that all the studied focal point parameters have an effect on the quality, weld-bead geometry, and hardness of the welds produced.

Relevance: 30.00%

Abstract:

Our objective is to evaluate the accuracy of three algorithms in differentiating the origins of outflow tract ventricular arrhythmias (OTVAs). This study involved 110 consecutive patients with OTVAs for whom a standard 12-lead surface electrocardiogram (ECG) showed typical left bundle branch block morphology with an inferior axis. All the ECG tracings were retrospectively analyzed using the following three recently published ECG algorithms: 1) the transitional zone (TZ) index, 2) the V2 transition ratio, and 3) the V2 R wave duration and R/S wave amplitude indices. Considering all patients, the V2 transition ratio had the highest sensitivity (92.3%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (93.9%). The latter had a maximal area under the ROC curve of 0.925. In patients with left ventricular (LV) rotation, the V2 transition ratio had the highest sensitivity (94.1%), while the R wave duration and R/S wave amplitude indices in V2 had the highest specificity (87.5%). The former had a maximal area under the ROC curve of 0.892. All three published ECG algorithms are effective in differentiating the origin of OTVAs; the V2 transition ratio is the most sensitive and the V2 R wave duration and R/S wave amplitude indices are the most specific. Among all patients, the V2 R wave duration and R/S wave amplitude algorithm had the maximal area under the ROC curve, but in patients with LV rotation the V2 transition ratio algorithm had the maximum area under the ROC curve.
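
For concreteness, the V2 transition ratio can be computed from the R- and S-wave amplitudes in lead V2 during the arrhythmia and during sinus rhythm. The sketch below follows the commonly quoted definition and the ≥ 0.6 cut-off for a left-sided origin; both should be verified against the original publication, and the amplitudes used here are made up.

```python
def v2_transition_ratio(r_vt_mv, s_vt_mv, r_sr_mv, s_sr_mv):
    """V2 transition ratio: R-wave fraction in lead V2 during the arrhythmia
    divided by the R-wave fraction in V2 during sinus rhythm. In the commonly
    cited algorithm, a ratio >= 0.6 suggests a left-sided outflow tract
    origin (cut-off quoted from the literature, not from this abstract)."""
    vt = r_vt_mv / (r_vt_mv + s_vt_mv)   # R/(R+S) during the arrhythmia
    sr = r_sr_mv / (r_sr_mv + s_sr_mv)   # R/(R+S) during sinus rhythm
    return vt / sr

# illustrative amplitudes in millivolts
ratio = v2_transition_ratio(r_vt_mv=0.6, s_vt_mv=1.0, r_sr_mv=0.3, s_sr_mv=1.2)
print(f"V2 transition ratio = {ratio:.2f} ->",
      "LVOT suspected" if ratio >= 0.6 else "RVOT suspected")
```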

Relevance: 30.00%

Abstract:

The context of this study is corporate e-learning, with an explicit focus on how digital learning design can facilitate self-regulated learning (SRL). The field of e-learning is growing rapidly. An increasing number of corporations use digital technology and e-learning for training their work force and customers. E-learning may offer economic benefits, as well as opportunities for interaction and communication that traditional teaching cannot provide. However, the evolving variety of digital learning contexts makes new demands on learners, requiring them to develop strategies to adapt and cope with novel learning tools. This study derives from the need to learn more about learning experiences in digital contexts in order to be able to design these properly for learning. The research question targets how the design of an e-learning course influences participants’ self-regulated learning actions and intentions. SRL involves learners’ ability to exercise agency in their learning. Micro-level SRL processes were targeted by exploring behaviour, cognition, and affect/motivation in relation to the design of the digital context. Two iterations of an e-learning course were tested on two groups of participants (N=17). However, the exploration of SRL extends beyond the educational design research perspective of comparing the effects of the changes to the course designs. The study was conducted in a laboratory with each participant individually. Multiple types of data were collected. However, the results presented in this thesis are based on screen observations (including eye tracking) and video-stimulated recall interviews. These data were integrated in order to achieve a broad perspective on SRL. The most essential change evident in the second course iteration was the addition of feedback during practice and the final test. Without feedback on their actions, there was an observable difference between participants who were instruction-directed and those who were self-directed in manipulating the context and thus persisted whenever faced with problems. In the second course iteration, including the feedback, this kind of difference was not found. Feedback provided the tipping point for participants to regulate their learning by identifying their knowledge gaps and exploring the learning context in a targeted manner. Furthermore, the course content was consistently seen from a pragmatic perspective, which influenced the participants’ choice of actions, showing that real-life relevance is an important need of corporate learners. This also relates to assessment and the consideration of its purpose in relation to participants’ work situation. The rigidity of the multiple-choice questions, focusing on the memorisation of details, led the participants to adopt a surface approach to learning. It also caused frustration in cases where the participants’ epistemic beliefs were incompatible with this kind of assessment style. Triggers of positive and negative emotions could be categorized into four levels: personal factors, instructional design of content, interface design of context, and technical solution. In summary, the key design choices for creating a positive learning experience involve feedback, flexibility, functionality, fun, and freedom. The design of the context impacts regulation of behaviour, cognition, as well as affect and motivation. The learners’ awareness of these areas of regulation in relation to learning in a specific context is their ability for design-based epistemic metareflection.
I describe this metareflection as knowing how to manipulate the context behaviourally for maximum learning, being metacognitively aware of one’s learning process, and being aware of how emotions can be regulated to maintain volitional control of the learning situation. Attention needs to be paid to how the design of a digital learning context supports learners’ metareflective development as digital learners. Every digital context has its own affordances and constraints, which influence the possibilities for micro-level SRL processes. Empowering learners in developing their ability for design-based epistemic metareflection is, therefore, essential for building their digital literacy in relation to these affordances and constraints. It was evident that the implementation of e-learning in the workplace is not unproblematic and needs new ways of thinking about learning and how we create learning spaces. Digital contexts bring a new culture of learning that demands attitude change in how we value knowledge, measure it, define who owns it, and who creates it. Based on the results, I argue that digital solutions for corporate learning ought to be built as an integrated system that facilitates socio-cultural connectivism within the corporation. The focus needs to shift from designing static e-learning material to managing networks of social meaning negotiation as part of a holistic corporate learning ecology.

Relevance: 30.00%

Abstract:

Electric energy demand has been growing constantly as the global population increases. To avoid electric energy shortage, renewable energy sources and energy conservation are emphasized all over the world. The role of power electronics in energy saving and development of renewable energy systems is significant. Power electronics is applied in wind, solar, fuel cell, and micro turbine energy systems for the energy conversion and control. The use of power electronics introduces an energy saving potential in such applications as motors, lighting, home appliances, and consumer electronics. Despite the advantages of power converters, their penetration into the market requires that they have a set of characteristics such as high reliability and power density, cost effectiveness, and low weight, which are dictated by the emerging applications. In association with the increasing requirements, the design of the power converter is becoming more complicated, and thus, a multidisciplinary approach to the modelling of the converter is required. In this doctoral dissertation, methods and models are developed for the design of a multilevel power converter and the analysis of the related electromagnetic, thermal, and reliability issues. The focus is on the design of the main circuit. The electromagnetic model of the laminated busbar system and the IGBT modules is established with the aim of minimizing the stray inductance of the commutation loops that degrade the converter power capability. The circular busbar system is proposed to achieve equal current sharing among parallel-connected devices and implemented in the non-destructive test set-up. In addition to the electromagnetic model, a thermal model of the laminated busbar system is developed based on a lumped parameter thermal model. The temperature and temperature-dependent power losses of the busbars are estimated by the proposed algorithm. The Joule losses produced by non-sinusoidal currents flowing through the busbars in the converter are estimated taking into account the skin and proximity effects, which have a strong influence on the AC resistance of the busbars. The lifetime estimation algorithm was implemented to investigate the influence of the cooling solution on the reliability of the IGBT modules. As efficient cooling solutions have a low thermal inertia, they cause excessive temperature cycling of the IGBTs. Thus, a reliability analysis is required when selecting the cooling solutions for a particular application. The control of the cooling solution based on the use of a heat flux sensor is proposed to reduce the amplitude of the temperature cycles. The developed methods and models are verified experimentally by a laboratory prototype.
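
The lumped-parameter thermal modelling mentioned above is often realized as a Foster RC network, in which the junction temperature response to a power step is a sum of exponentials; the amplitude of the resulting temperature cycles is what drives the reliability analysis. A minimal sketch under that assumption follows, with invented R and tau values rather than the dissertation's.

```python
import math

def junction_temperature(power_w, t_ambient, foster_pairs, t_s):
    """Junction temperature after a constant power step, using a Foster-type
    lumped-parameter thermal network (sum of RC exponentials). The R/tau
    pairs below are illustrative, not values from the dissertation."""
    rise = sum(r * (1.0 - math.exp(-t_s / tau)) for r, tau in foster_pairs)
    return t_ambient + power_w * rise

# (thermal resistance K/W, time constant s) per Foster element
foster = [(0.02, 0.01), (0.05, 0.1), (0.08, 1.0)]
for t in (0.01, 0.1, 1.0, 10.0):
    tj = junction_temperature(500.0, 40.0, foster, t)
    print(f"t = {t:5.2f} s -> Tj = {tj:.1f} C")
```

A cooling solution with low thermal inertia corresponds to short time constants here, so the junction temperature tracks load pulses closely and cycles with a larger amplitude, which is exactly the effect the proposed heat-flux-sensor control aims to damp.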

Relevance: 30.00%

Abstract:

This thesis is done as a part of the NEOCARBON project. The aim of the NEOCARBON project is to study a fully renewable energy system utilizing Power-to-Gas or Power-to-Liquid technology for energy storage. Power-to-Gas consists of two main operations: hydrogen production via electrolysis and methane production via methanation. Methanation requires carbon dioxide and hydrogen as raw materials. This thesis studies the potential carbon dioxide sources within Finland. The different sources are ranked using the cost and energy penalty of the carbon capture, the biogenity of the carbon, and compatibility with Power-to-Gas. It can be concluded that Finland has enough CO2 point sources to provide a national PtG system with sufficient amounts of carbon. The pulp and paper industry is the single largest producer of biogenic CO2 in Finland. A single unit capable of grid balancing and energy transformation via Power-to-Gas and Gas-to-Power can be obtained by coupling biogas plants with biomethanation and CHP units.
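
The ranking described above can be pictured as a weighted multi-criteria score. The sketch below is purely illustrative: the criteria follow the abstract (capture cost, energy penalty, biogenity, PtG compatibility), but all weights and per-source numbers are invented.

```python
# Toy ranking of CO2 point sources for PtG use; every number is invented.
sources = {
    #                cost EUR/t  penalty MJ/kg  biogenic  compatibility (0-1)
    "pulp mill":        (35.0,       2.5,         1.0,      0.9),
    "CHP plant":        (45.0,       3.0,         0.5,      0.8),
    "cement plant":     (55.0,       3.5,         0.0,      0.7),
    "biogas upgrading": (20.0,       0.5,         1.0,      1.0),
}

def score(cost, penalty, biogenic, compat,
          w_cost=0.35, w_pen=0.25, w_bio=0.25, w_comp=0.15):
    # lower cost and penalty are better, so normalise and invert those terms
    return (w_cost * (1 - cost / 60.0) + w_pen * (1 - penalty / 4.0)
            + w_bio * biogenic + w_comp * compat)

for name, vals in sorted(sources.items(), key=lambda kv: -score(*kv[1])):
    print(f"{name:16s} score {score(*vals):.2f}")
```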

Relevance: 30.00%

Abstract:

Point-of-care (POC) –diagnostics is a field with rapidly growing market share. As these applications become more widely used, there is an increasing pressure to improve their performance to match the one of a central laboratory tests. Lanthanide luminescence has been widely utilized in diagnostics because of the numerous advantages gained by the utilization of time-resolved or anti-Stokes detection. So far the use of lanthanide labels in POC has been scarce due to limitations set by the instrumentation required for their detection and the shortcomings, e.g. low brightness, of these labels. Along with the advances in the research of lanthanide luminescence, and in the field of semiconductors, these materials are becoming a feasible alternative for the signal generation also in the future POC assays. The aim of this thesis was to explore ways of utilizing time-resolved detection or anti-Stokes detection in POC applications. The long-lived fluorescence for the time-resolved measurement can be produced with lanthanide chelates. The ultraviolet (UV) excitation required by these chelates is cumbersome to produce with POC compatible fluorescence readers. In this thesis the use of a novel light-harvesting ligand was studied. This molecule can be used to excite Eu(III)-ions at wavelengths extending up to visible part of the spectrum. An enhancement solution based on this ligand showed a good performance in a proof-of-concept -bioaffinity assay and produced a bright signal upon 365 nm excitation thanks to the high molar absorptivity of the chelate. These features are crucial when developing miniaturized readers for the time-resolved detection of fluorescence. Upconverting phosphors (UCPs) were studied as an internal light source in glucose-sensing dry chemistry test strips and ways of utilizing their various emission wavelengths and near-infrared excitation were explored. The use of nanosized NaYF :Yb3+,Tm3+-particles enabled the replacement of an external UV-light source with a NIR-laser and gave an additional degree of freedom in the optical setup of the detector instrument. The new method enabled a blood glucose measurement with results comparable to a current standard method of measuring reflectance. Microsized visible emitting UCPs were used in a similar manner, but with a broad absorbing indicator compound filtering the excitation and emission wavelengths of the UCP. This approach resulted in a novel way of benefitting from the non-linear relationship between the excitation power and emission intensity of the UCPs, and enabled the amplification of the signal response from the indicator dye.
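
The amplification mechanism hinted at in the last sentence follows from the power-law response of upconversion emission. Assuming the usual approximation that the emission intensity scales as P^n with n ≈ 2 for a two-photon process (an assumption for illustration, not a value from the thesis), an indicator dye of transmittance T that filters both excitation and emission attenuates the detected signal as T^(n+1) rather than linearly as T:

```python
# Two-photon upconversion: emission ~ (P * T)**n, then filtered once more
# by the dye on the way out -> detected signal ~ T**(n + 1).
n = 2  # assumed photon order of the Yb/Tm upconversion process
for T in (1.0, 0.8, 0.5):
    linear = T             # conventional one-photon label: filtered once
    upconv = T ** (n + 1)  # UCP: excitation filtered n times, emission once
    print(f"T = {T:.1f}: linear response {linear:.2f}, UCP response {upconv:.2f}")
```

At T = 0.5 the UCP signal drops to 0.125 instead of 0.5, a four-fold steeper response to the dye, which is the amplification the abstract refers to.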

Relevance: 30.00%

Abstract:

Simplification of highly detailed CAD models is an important step when CAD models are visualized or by other means utilized in augmented reality applications. Without simplification, CAD models may cause severe processing and storage issues, especially on mobile devices. In addition, simplified models may have other advantages, such as better visual clarity or improved reliability when used for visual pose tracking. The geometry of CAD models is invariably presented in the form of a 3D mesh. In this paper, we survey mesh simplification algorithms in general and focus especially on algorithms that can be used to simplify CAD models. We test some commonly known algorithms with real-world CAD data and characterize some new CAD-related simplification algorithms that have not been covered in previous mesh simplification reviews.
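
To make the class of algorithms concrete, here is a minimal vertex-clustering simplifier in Python: vertices are snapped to a uniform grid, each cluster is merged to its mean position, and faces that collapse are dropped. It illustrates one of the simplest simplification schemes, not any specific method tested in the paper.

```python
import numpy as np

def cluster_simplify(vertices, faces, cell=0.1):
    """Vertex-clustering simplification: snap vertices to a uniform grid,
    merge every cluster to its mean position, and drop collapsed faces."""
    keys = np.floor(vertices / cell).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    # representative position of each cluster = mean of its members
    new_v = np.stack(
        [np.bincount(inverse, weights=vertices[:, a]) for a in range(3)], axis=1
    ) / counts[:, None]
    new_f = inverse[faces]
    keep = ((new_f[:, 0] != new_f[:, 1]) & (new_f[:, 1] != new_f[:, 2])
            & (new_f[:, 2] != new_f[:, 0]))
    return new_v, new_f[keep]

rng = np.random.default_rng(0)
v = rng.random((1000, 3))                    # toy vertex cloud
f = rng.integers(0, 1000, size=(2000, 3))    # toy triangle indices
sv, sf = cluster_simplify(v, f, cell=0.2)
print(len(v), "->", len(sv), "vertices;", len(f), "->", len(sf), "faces")
```

The cell size trades fidelity for reduction; quality-driven methods such as edge collapse with error quadrics make that trade-off per-edge instead of globally.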

Relevance: 30.00%

Abstract:

The fall of 2013 could be characterized as a crossroads in the geopolitics of Eastern Europe, namely Ukraine. Two rival geopolitical projects have been developing throughout the post-Cold War years, and it seems that they reached a collision point in Ukraine, a country whose authorities had long been switching sides between the European Union and the Russian Federation in their foreign policy commitments. The refusal/postponement by Yanukovich’s government to sign the Association Agreement with Brussels, an event expected by a large part of Ukrainian society, led to the ouster of that government and brought a pro-Western, anti-Russian government to power in Kyiv. It seems that after those events Ukraine embarked definitively on the path of integration into the West (the European Union and possibly NATO). The Russian Federation, which throughout Putin’s years has been engaged in the re-integration of the post-Soviet space, reacted to these developments in an assertive manner by violating borders, agreements, and the territorial integrity of Ukraine. The incorporation of Crimea into the Russian Federation is thus the first of its kind in the post-Soviet space, despite the various other conflicts that broke out in the region after the Soviet Union dissolved. In this thesis I investigate the nature of what will here be labelled the Crimean issue. I argue that the incorporation of the Crimean peninsula into the Russian Federation marks a new era in Russian geopolitical thinking that shapes, to a great extent, Russian foreign policy. Discourse analysis is the methodological basis for this study, with a special focus on Michel Foucault’s Archaeology of Knowledge. The innovation of this research is that it discusses Russian geopolitical discourse within the scope of Foucault’s ‘discursive tree’, with reference to the Crimean issue. A wide range of primary sources is consulted, such as presidential addresses to the Federal Assembly (2000-2014), the Foreign Policy Concepts of the Russian Federation (2000, 2008), Russian maritime doctrines, as well as Dugin’s Osnovy Geopolitiki (Foundations of Geopolitics), Mahan’s The Influence of Sea Power Upon History, 1660–1783, and other Eurasianism-related literature.

Relevance: 30.00%

Abstract:

Currently, laser scribing is a growing material processing method in industry. The benefits of laser scribing technology are studied, for example, for improving the efficiency of solar cells. Due to the high quality requirements of the fast scribing process, it is important to monitor the process in real time to detect possible defects during processing. However, there is a lack of studies on real-time monitoring of laser scribing. Commonly used monitoring methods developed for other laser processes, such as laser welding, are too slow, and existing applications cannot be implemented for monitoring of fast laser scribing. The aim of this thesis is to find a method for laser scribing monitoring with a high-speed camera and to evaluate the reliability and performance of the developed monitoring system with experiments. The laser used in the experiments is an IPG ytterbium pulsed fiber laser with a 20 W maximum average power, and the scan head optics used with the laser is a Scanlab Hurryscan 14 II with an f100 telecentric lens. The camera was connected to the laser scanner using a camera adapter to follow the laser process. A powerful, fully programmable industrial computer was chosen for executing the image processing and analysis. Algorithms for defect analysis, based on particle analysis, were developed using LabVIEW system design software. The performance of the algorithms was analyzed on a non-moving image of the scribing line with a resolution of 960×20 pixels. As a result, the maximum analysis speed was 560 frames per second. The reliability of the algorithm was evaluated by imaging a scribing path containing a variable number of defects at 2000 mm/s with the laser turned off, at an image analysis speed of 430 frames per second. The experiment was successful: the algorithms detected all defects on the scribing path. The final monitoring experiment was performed during a laser process. However, it was challenging to make active laser illumination work with the laser scanner due to the physical dimensions of the laser lens and the scanner. For reliable defect detection, the illumination system needs to be replaced.
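
A particle-analysis defect check of the kind described can be sketched in a few lines: threshold the frame, label connected regions, and keep regions above a minimum area. The sketch below uses scipy.ndimage in place of the LabVIEW implementation, and the threshold and area values are invented.

```python
import numpy as np
from scipy import ndimage

def find_defects(frame, dark_thresh=80, min_area=4):
    """Particle-analysis defect detection on one 960x20 frame of the scribe
    line: threshold pixels darker than the ablated groove should be, label
    connected regions, and report those above a minimum area. The threshold
    values are illustrative; a real system would calibrate them."""
    mask = frame < dark_thresh
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return [i + 1 for i, s in enumerate(sizes) if s >= min_area]

# synthetic 8-bit frame: bright groove with one dark defect blob
frame = np.full((20, 960), 200, dtype=np.uint8)
frame[8:12, 300:310] = 30
print("defect labels:", find_defects(frame))
```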

Relevance: 30.00%

Abstract:

The (n, k)-star interconnection network was proposed in 1995 as an attractive alternative to the n-star topology in parallel computation. The (n, k)-star has significant advantages over the n-star, which itself was proposed as an attractive alternative to the popular hypercube. The major advantage of the (n, k)-star network is its scalability, which makes it more flexible than the n-star as an interconnection network. In this thesis, we focus on finding graph-theoretical properties of the (n, k)-star as well as developing parallel algorithms that run on this network. The basic topological properties of the (n, k)-star are first studied. These are useful since they can be used to develop efficient algorithms on this network. We then study the (n, k)-star network from an algorithmic point of view. Specifically, we investigate both fundamental and application algorithms for basic communication, prefix computation, sorting, etc. A literature review of the state of the art in relation to the (n, k)-star network, as well as some open problems in this area, is also provided.
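
For reference, the standard definition of the (n, k)-star translates directly into a small graph generator: vertices are k-permutations of n symbols, and each vertex has k−1 "swap" neighbours plus n−k "replace" neighbours, giving degree n−1. A sketch assuming that standard definition:

```python
from itertools import permutations

def nk_star(n, k):
    """Build the (n,k)-star graph S(n,k): vertices are k-permutations of
    {1..n}; edges either swap the first symbol with the i-th symbol
    (2 <= i <= k) or replace the first symbol with an unused symbol."""
    adj = {}
    symbols = range(1, n + 1)
    for v in permutations(symbols, k):
        nbrs = []
        for i in range(1, k):                      # star (i-) edges
            u = list(v)
            u[0], u[i] = u[i], u[0]
            nbrs.append(tuple(u))
        for s in symbols:                          # residual (1-) edges
            if s not in v:
                nbrs.append((s,) + v[1:])
        adj[v] = nbrs
    return adj

g = nk_star(4, 2)                # 4!/2! = 12 vertices
assert all(len(nb) == 4 - 1 for nb in g.values())   # degree n - 1
print(len(g), "vertices, degree", len(next(iter(g.values()))))
```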

Relevance: 30.00%

Abstract:

The (n, k)-arrangement interconnection topology was first introduced in 1992. The (n, k)-arrangement graph is a class of generalized star graphs. Compared with the well-known n-star, the (n, k)-arrangement graph is more flexible in degree and diameter. However, few algorithms have been designed for the (n, k)-arrangement graph to date. In this thesis, we focus on finding graph-theoretical properties of the (n, k)-arrangement graph and developing parallel algorithms that run on this network. The topological properties of the arrangement graph are first studied, including its cyclic properties. We then study the problems of communication: broadcasting and routing. Embedding problems are also studied. These are very useful for developing efficient algorithms on this network. We then study the (n, k)-arrangement network from the algorithmic point of view. Specifically, we investigate both fundamental and application algorithms such as prefix sums computation, sorting, merging, and a basic geometry computation: finding the convex hull on the (n, k)-arrangement graph. A literature review of the state of the art in relation to the (n, k)-arrangement network is also provided, as well as some open problems in this area.
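
The (n, k)-arrangement graph has an even simpler adjacency rule under its standard definition: two k-permutations are adjacent iff they differ in exactly one position, giving degree k(n−k). A quick check of that property, assuming the standard definition:

```python
from itertools import permutations

def arrangement_adjacent(u, v):
    """Adjacency in the (n,k)-arrangement graph A(n,k): two k-permutations
    are adjacent iff they differ in exactly one position (standard
    definition; each vertex then has degree k * (n - k))."""
    return sum(a != b for a, b in zip(u, v)) == 1

n, k = 4, 2
verts = list(permutations(range(1, n + 1), k))
deg = sum(arrangement_adjacent(verts[0], w) for w in verts)
print(len(verts), "vertices; degree of", verts[0], "is", deg)  # expect k(n-k) = 4
```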