814 results for Load disaggregation algorithm
Abstract:
Fetal MRI reconstruction aims at finding a high-resolution image given a small set of low-resolution images. It is usually modeled as an inverse problem in which the regularization term plays a central role in reconstruction quality. The literature has considered several regularization terms, such as Dirichlet/Laplacian energy [1], Total Variation (TV)-based energies [2,3] and, more recently, non-local means [4]. Although TV energies are attractive because of their edge-preserving ability, only standard explicit steepest-descent techniques have been applied to optimize fetal TV energies. The main contribution of this work lies in the introduction of a well-posed TV algorithm from the point of view of convex optimization. Specifically, our proposed TV optimization algorithm for fetal reconstruction is optimal with respect to the asymptotic and iterative convergence speeds, O(1/n²) and O(1/√ε), whereas existing techniques achieve O(1/n) and O(1/ε). We apply our algorithm to (1) clinical newborn data, considered as ground truth, and (2) clinical fetal acquisitions. Our algorithm compares favorably with the literature in terms of speed and accuracy.
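The abstract does not spell out the optimization scheme, but the quoted O(1/n²) objective decay is the hallmark of Nesterov-accelerated proximal gradient methods such as FISTA. Below is a minimal sketch of that algorithm family, demonstrated on a simple 1-D proximable problem rather than the paper's TV energy; every name and the test problem are illustrative, not the authors' method.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*|x| (stands in for a TV prox here)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(grad_f, prox_g, x0, L, n_iter=200):
    """Accelerated proximal gradient (FISTA): O(1/n^2) objective decay,
    versus O(1/n) for plain (steepest-descent-style) proximal gradient."""
    x_prev = np.asarray(x0, dtype=float)
    y = x_prev.copy()
    t = 1.0
    for _ in range(n_iter):
        x = prox_g(y - grad_f(y) / L, 1.0 / L)           # proximal gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)      # Nesterov momentum
        x_prev, t = x, t_next
    return x_prev

# Toy usage: minimize 0.5*(x-3)^2 + 0.5*|x|; the minimizer is x = 2.5.
x_star = fista(lambda v: v - 3.0,
               lambda v, step: soft_threshold(v, 0.5 * step),
               np.array([0.0]), L=1.0)
```

The momentum sequence t is exactly what upgrades the O(1/n) rate of explicit descent to O(1/n²).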
Abstract:
This paper describes Question Waves, an algorithm that can be applied to social search protocols such as Asknext or Sixearch. In this model, queries are propagated through the social network, with faster propagation through more trustworthy acquaintances. Question Waves uses local information to make decisions and to obtain an answer ranking. With Question Waves, the answers that arrive first are the most likely to be relevant, and we computed the correlation of answer relevance with order of arrival to demonstrate this result. We obtained correlations equivalent to those of heuristics that use global knowledge, such as profile similarity among users or the expertise value of an agent. Because Question Waves is compatible with the social search protocol Asknext, it is possible to stop a search once enough relevant answers have been found; additionally, stopping the search early introduces only a minimal risk of not obtaining the best possible answer. Furthermore, Question Waves does not require a re-ranking algorithm because the results arrive already sorted.
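The propagation rule is described only qualitatively; "faster propagation through more trustworthy acquaintances" can be modeled as shortest-path delays in which an edge's traversal time shrinks as trust grows, so answers along trusted paths arrive first. A hedged sketch (the graph shape, trust values and the 1/trust delay are illustrative assumptions, not the paper's protocol):

```python
import heapq

def propagation_times(graph, source):
    """Dijkstra over trust-weighted delays: an edge with trust t is
    traversed in 1/t time, so the query reaches trusted contacts first."""
    times = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > times.get(node, float("inf")):
            continue                      # stale heap entry
        for neigh, trust in graph.get(node, []):
            nt = t + 1.0 / trust
            if nt < times.get(neigh, float("inf")):
                times[neigh] = nt
                heapq.heappush(heap, (nt, neigh))
    return times

# Toy usage: "b" is reached faster through the trusted relay "a"
# (delay 1 + 1 = 2) than through the direct low-trust edge (delay 4).
net = {"me": [("a", 1.0), ("b", 0.25)], "a": [("b", 1.0)]}
arrival = propagation_times(net, "me")
```

Sorting agents by these arrival times is what lets first-arriving answers double as a relevance ranking.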
Abstract:
Allostatic load (AL) is a marker of physiological dysregulation which reflects exposure to chronic stress. High AL has been related to poorer health outcomes, including mortality. We examine here the association of socioeconomic and lifestyle factors with AL. Additionally, we investigate the extent to which AL is genetically determined. We included 803 participants (52% women, mean age 48 ± 16 years) from a population- and family-based Swiss study. We computed an AL index aggregating 14 markers from the cardiovascular, metabolic, lipid, oxidative, hypothalamus-pituitary-adrenal and inflammatory homeostatic axes. Education and occupational position were used as indicators of socioeconomic status. Marital status, stress, alcohol intake, smoking, dietary patterns and physical activity were considered as lifestyle factors. Heritability of AL was estimated by maximum likelihood. Women with a low occupational position had higher AL (low vs. high OR=3.99, 95% CI [1.22;13.05]), while the opposite was observed for men (middle vs. high OR=0.48, 95% CI [0.23;0.99]). Education tended to be inversely associated with AL in both sexes (low vs. high OR=3.54, 95% CI [1.69;7.40] / OR=1.59, 95% CI [0.88;2.90] in women/men). Heavy-drinking men as well as women abstaining from alcohol had higher AL than moderate drinkers. Physical activity was protective against AL, while high salt intake was related to increased AL risk. The heritability of AL was estimated to be 29.5% ± 7.9%. Our results suggest that generalized physiological dysregulation, as measured by AL, is determined by both environmental and genetic factors. The genetic contribution to AL remains modest when compared to the environmental component, which explains approximately 70% of the phenotypic variance.
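The abstract aggregates 14 biomarkers into an AL index without stating the scoring rule; a common convention in the AL literature is to count, per subject, how many markers fall in the sample's high-risk quartile. A sketch under that assumption (it also simplifies by treating "higher" as the risky direction for every marker, which is not true of all 14 axes):

```python
import numpy as np

def allostatic_load_index(markers, high_risk_quantile=0.75):
    """Count-based AL score: for each subject (row), count how many
    markers (columns) exceed the sample's 75th-percentile cutoff."""
    markers = np.asarray(markers, dtype=float)   # shape: subjects x markers
    cutoffs = np.quantile(markers, high_risk_quantile, axis=0)
    return (markers > cutoffs).sum(axis=1)

# Toy usage: 4 subjects, 2 markers; only the last subject is in the
# high-risk quartile for both markers.
scores = allostatic_load_index([[1, 10], [2, 20], [3, 30], [4, 40]])
```

A heritability analysis like the paper's would then treat this integer score as the phenotype.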
Abstract:
In recent years, many research groups have taken an interest in how to monitor the training of elite athletes in order to optimize its effectiveness while preserving the athletes' health. One of the cardinal problems of poorly managed athletic training is the overtraining syndrome. The definition of this syndrome proposed by Kreider et al., currently accepted by both the European College of Sport Science and the American College of Sports Medicine, reads: "An accumulation of training and/or non-training stress resulting in long-term decrement in performance capacity with or without related physiological and psychological signs and symptoms of maladaptation in which restoration of performance capacity may take several weeks or months." Current recommendations on training monitoring and early detection of the overtraining syndrome advocate, among other measures, psychological follow-up using questionnaires (such as the Profile of Mood States (POMS)), monitoring of the training load perceived by the athlete (e.g., with the session rating of perceived exertion (RPE) method according to C. Foster), monitoring of athletes' performances and completed training loads, and follow-up of health problems (injuries and illnesses). Monitoring blood and hormonal parameters is not recommended, on the one hand for reasons of cost and feasibility, and on the other because the scientific literature has so far been unable to establish clear evidence on the subject.
To date, few studies have followed these parameters rigorously, over a long period and in a large number of athletes. That is precisely the aim of our study.
Abstract:
This paper proposes a pose-based algorithm to solve the full SLAM problem for an autonomous underwater vehicle (AUV) navigating in an unknown and possibly unstructured environment. The technique combines probabilistic scan matching, using range scans gathered from a mechanical scanning imaging sonar (MSIS), with the robot's dead-reckoning displacements estimated from a Doppler velocity log (DVL) and a motion reference unit (MRU). The proposed method uses two extended Kalman filters (EKFs). The first estimates the local path travelled by the robot while grabbing the scan, together with its uncertainty, and provides position estimates for correcting the distortions that vehicle motion produces in the acoustic images. The second is an augmented-state EKF that estimates and maintains the poses of the registered scans. The raw sensor data are processed and fused on-line. No prior structural information or initial pose is assumed. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.
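The two filters are not specified beyond being EKFs, but both share the same predict/correct cycle. A generic sketch of one such cycle follows; the models f, h and Jacobians F, H are placeholders for the paper's motion and measurement models, and the toy usage below is a linear 1-D stand-in, not the AUV system.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One predict/update cycle of an (extended) Kalman filter.
    f, h: motion and measurement models; F, H: their Jacobians
    evaluated at the current estimate; Q, R: noise covariances."""
    # Predict using the dead-reckoning input u (e.g. DVL/MRU displacement).
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Correct with the measurement z (e.g. a matched sonar-scan pose).
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: scalar state, identity models, measurement agreeing with
# the prediction, so the estimate lands on 1.0 with reduced covariance.
I = np.array([[1.0]])
x_new, P_new = ekf_step(np.array([0.0]), I.copy(), np.array([1.0]),
                        np.array([1.0]), f=lambda x, u: x + u, F=I,
                        h=lambda x: x, H=I, Q=0.1 * I, R=0.1 * I)
```

The paper's augmented-state EKF would stack one such pose block per registered scan in x.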
Abstract:
Image segmentation of natural scenes constitutes a major problem in machine vision. This paper presents a new approach to the image segmentation problem based on the integration of edge and region information. The approach begins by detecting the main contours of the scene, which are later used to guide a concurrent set of growing processes. A prior analysis of the seed pixels permits adjustment of the homogeneity criterion to the region's characteristics during the growing process. Since the high variability of regions in outdoor scenes makes the classical homogeneity criteria useless, a new homogeneity criterion based on clustering analysis and convex hull construction is proposed. Experimental results have proven the reliability of the proposed approach.
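The paper's homogeneity criterion is cluster- and convex-hull-based; as a baseline illustration of the machinery it plugs into, here is the classical seeded region-growing loop, with a plain intensity-tolerance test standing in for the paper's criterion (the 4-neighbourhood and tolerance value are illustrative choices):

```python
from collections import deque

import numpy as np

def region_grow(image, seed, tol):
    """Grow a region from `seed`, absorbing 4-neighbours whose intensity
    stays within `tol` of the running region mean (a simple homogeneity
    criterion; the paper replaces this test with a richer one)."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(image[nr, nc] - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    frontier.append((nr, nc))
    return mask

# Toy usage: the homogeneous 2x2 block of 1s is captured; the 9s are not.
img = np.array([[1, 1, 9], [1, 1, 9], [9, 9, 9]], dtype=float)
mask = region_grow(img, (0, 0), tol=2.0)
```

Running several such processes concurrently, each seeded and bounded by detected contours, is the integration strategy the abstract describes.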
Abstract:
This work used a differential evolution algorithm to find, for a two-shaft microturbine equipped with intercooling, reheat and a recuperator, the compressor pressure ratios and the degree of recuperation that would give the best possible retention of part-load efficiency. The chosen part-load power control method was a combination of rotational-speed control and lowering of the turbine inlet temperatures, in which the rotational speed of the generator shaft and the inlet temperatures of both turbines could be adjusted freely and independently of one another. The work found an optimal combination of control methods that achieves better retention of part-load efficiency than any of the methods used alone. In addition, it was observed that the optimal control method does not significantly depend on the design-point parameters chosen for the machine. The machine that was optimal with respect to part-load efficiency retention did not differ significantly from the one that was optimal with respect to design-point efficiency.
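The optimizer is named (differential evolution) but not detailed; below is a minimal DE/rand/1/bin sketch on a toy quadratic objective. The population size, F, CR and the test function are illustrative stand-ins for the thesis's turbine model, not its actual settings.

```python
import random

def differential_evolution(obj, bounds, pop_size=20, F=0.8, CR=0.9, n_gen=100):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two
    random members, binomial crossover, greedy one-to-one selection."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    scores = [obj(x) for x in pop]
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)          # force >= 1 mutated gene
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if j == j_rand or random.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                trial.append(min(max(v, lo), hi))   # clip to the box
            s = obj(trial)
            if s <= scores[i]:                      # greedy selection
                pop[i], scores[i] = trial, s
    return pop[scores.index(min(scores))]

# Toy usage: minimize (x-1)^2 + (y+2)^2 over [-5, 5]^2.
random.seed(0)
best = differential_evolution(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                              [(-5, 5), (-5, 5)])
```

In the thesis, the decision vector would hold the pressure ratios and recuperation degree, and the objective would score part-load efficiency retention via the engine model.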
Abstract:
We adapt the Shout and Act algorithm to digital object preservation, where agents explore file systems looking for digital objects to be preserved (victims). When they find something, they "shout" so that their agent mates can hear it. The louder the shout, the more urgent or important the finding; louder shouts can also indicate closeness. We perform several experiments to show that this system scales very well, and that heterogeneous teams of agents outperform homogeneous ones over a wide range of task complexities. The target at-risk documents are MS Office documents (including an RTF file) with Excel content or in Excel format. An interesting conclusion from the experiments is thus that fewer heterogeneous (varying-skill) agents can equal the performance of many homogeneous (combined super-skilled) agents, implying significant performance increases with lower overall cost growth. Our results impact the design of digital object preservation teams: a properly designed combination of heterogeneous teams is cheaper and more scalable when confronted with uncertain maps of digital objects that need to be preserved. A cost pyramid is proposed for engineers to use when modeling the most effective agent combinations.
Abstract:
As wireless communications evolve towards heterogeneous networks, mobile terminals have been enabled to hand over seamlessly from one network to another. At the same time, the continuous increase in terminal power consumption has resulted in an ever-decreasing battery lifetime. To that end, network selection is expected to play a key role in minimizing energy consumption, and thus in extending terminal lifetime. Hitherto, terminals select the network that provides the highest received power. However, it has been proved that this solution does not provide the highest energy efficiency. Thus, this paper proposes an energy-efficient vertical handover algorithm that selects the most energy-efficient network, i.e., the one that minimizes the uplink power consumption. The performance of the proposed algorithm is evaluated through extensive simulations, and it is shown to achieve high energy-efficiency gains compared to the conventional approach.
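The contrast the abstract draws (highest downlink received power versus lowest required uplink power) is easy to make concrete. In this sketch the two selection rules disagree on purpose; the field names and numbers are invented for illustration and are not from the paper.

```python
def conventional_select(networks):
    """Legacy rule: pick the network with the strongest received signal."""
    return max(networks, key=lambda n: n["rx_power_dbm"])

def energy_efficient_select(networks):
    """Proposed-style rule: pick the network that needs the least
    uplink transmit power from the terminal."""
    return min(networks, key=lambda n: n["uplink_power_mw"])

# Toy usage: "A" is louder on the downlink, but "B" costs the terminal
# far less uplink power, so the two rules choose differently.
nets = [
    {"name": "A", "rx_power_dbm": -60, "uplink_power_mw": 200},
    {"name": "B", "rx_power_dbm": -75, "uplink_power_mw": 50},
]
```

This divergence is exactly why the strongest-signal heuristic is not the most energy-efficient choice.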
Abstract:
This thesis concentrates on developing a practical local-approach methodology, based on micromechanical models, for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach, namely the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation, have been studied in detail. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It has been found that the true mid-point algorithm (a = 0.5) is the most accurate when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion and Thomason's plastic limit-load failure criterion.
Significant differences in the ductility predictions of the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void-matrix geometry, Thomason's failure criterion has been modified, and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model. Hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local-approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. By using the void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted using the present methodology. This application has shown how the damage parameters of both the base material and the heat-affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local-approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
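The Gurson-Tvergaard return mapping itself is too involved to reproduce here, but the generalized mid-point family the thesis builds on can be shown on a scalar ODE: a = 0 gives Euler forward, a = 1 Euler backward, and a = 0.5 the true mid-point rule reported as most accurate. A sketch under those assumptions (the fixed-point solve of the implicit equation is an illustrative choice; the thesis uses Newton iteration with consistent tangent moduli):

```python
def generalized_midpoint_step(f, x, h, a, n_fixed_point=50):
    """One step of the generalized mid-point rule
        x_{n+1} = x_n + h * f((1-a)*x_n + a*x_{n+1}).
    a = 0: Euler forward (explicit); a = 1: Euler backward;
    a = 0.5: true mid-point. For a > 0 the update is implicit and is
    solved here by simple fixed-point iteration."""
    x_next = x
    for _ in range(n_fixed_point):
        x_next = x + h * f((1.0 - a) * x + a * x_next)
    return x_next

# Toy usage on x' = -x, x(0) = 1, step h = 0.1: the a = 0.5 update has
# the closed form (1 - h/2) / (1 + h/2) = 0.95 / 1.05.
x1 = generalized_midpoint_step(lambda v: -v, 1.0, 0.1, a=0.5)
```

In the constitutive setting, f becomes the plastic flow rule and x the stress/void-fraction state, but the role of the parameter a is the same.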
Abstract:
To predict the capacity of a structure, or the point that is followed by instability, calculation of the critical crack size is important. Structures usually contain several cracks, but not all of these cracks necessarily lead to failure or reach the critical size. Thus, identifying the harmful cracks, or the crack size that most readily leads to failure, provides criteria for the structure's capacity at elevated temperature. The scope of this thesis was to calculate fracture parameters such as the stress intensity factor and the J-integral, together with the plastic and ultimate capacity of the structure, in order to estimate the critical crack size for this specific structure. Several three-dimensional (3D) simulations, using the finite element method in the Ansys program and the boundary element method in the Frank 3D program, were carried out to calculate the fracture parameters, and the results, combined with laboratory tests (load-displacement curve, J-resistance curve, and yield or ultimate stress), led to the extraction of the critical crack size. The two types of fracture that are usually affected by temperature, elastic and elastic-plastic fracture, were simulated by performing several linear elastic and nonlinear elastic analyses. Geometry details of the weldment, the flank angle and the toe radius, were also studied independently to estimate the location of crack initiation and to simulate the stress field in the early stages of crack extension in the structure. This work also gives an overview of the structure's capacity at room temperature (20 °C). Comparison of the results at different temperatures (20 °C and -40 °C) provides a threshold for the structure's behavior within the defined range.
Abstract:
The maximum realizable power throughput of power electronic converters may be limited or constrained by technical or economic considerations. One solution to this problem is to connect several power converter units in parallel. The parallel connection can be used to increase the current-carrying capacity of the overall system beyond the ratings of individual power converter units. Thus, it is possible to use several lower-power converter units, produced in large quantities, as building blocks to construct high-power converters in a modular manner. High-power converters realized by parallel connection are needed, for example, in multimegawatt wind power generation systems. Parallel connection of power converter units is also required in emerging applications such as photovoltaic and fuel cell power conversion. The parallel operation of power converter units is not, however, problem free. This is because parallel-operating units are subject to overcurrent stresses, which are caused by unequal load current sharing or by currents that flow between the units. Commonly, the term 'circulating current' is used to describe both the unequal load current sharing and the currents flowing between the units. Circulating currents, in turn, are caused by component tolerances and asynchronous operation of the parallel units. Parallel-operating units are also subject to stresses caused by unequal thermal stress distribution. Both of these problems can, nevertheless, be handled with proper circulating current control. To design an effective circulating current control system, we need information about the circulating current dynamics. The dynamics of the circulating currents can be investigated by developing appropriate mathematical models. In this dissertation, circulating current models are developed for two different types of parallel two-level three-phase inverter configurations.
The models, which are developed for an arbitrary number of parallel units, provide a framework for analyzing circulating current generation mechanisms and developing circulating current control systems. In addition to developing circulating current models, the modulation of parallel inverters is considered. It is illustrated that, depending on the parallel inverter configuration and the modulation method applied, common-mode circulating currents may be excited as a consequence of the differential-mode circulating current control. To prevent the common-mode circulating currents caused by the modulation, a dual modulator method is introduced. The dual modulator basically consists of two independently operating modulators, the outputs of which eventually constitute the switching commands of the inverter. The two independently operating modulators are referred to as the primary and secondary modulators. In its intended usage, the same voltage vector is fed to the primary modulators of each parallel unit, and the inputs of the secondary modulators are obtained from the circulating current controllers. To ensure that the voltage commands obtained from the circulating current controllers are realizable, it must be guaranteed that the inverter is not driven into saturation by the primary modulator. The inverter saturation can be prevented by limiting the inputs of the primary and secondary modulators. Because of this, a limitation algorithm is also proposed. The operation of both the proposed dual modulator and the limitation algorithm is verified experimentally.
Abstract:
In the Russian wholesale market, electricity and capacity are traded separately. Capacity is a special good whose sale obliges suppliers to keep their generating equipment ready to produce the quantity of electricity indicated by the System Operator. Capacity trading was established to maintain reliable and uninterrupted delivery of electricity in the wholesale market. The price of capacity reflects ongoing investments in the construction, modernization and maintenance of power plants. The sale of capacity thus creates favorable conditions for attracting investment in the energy sector, because it guarantees investors that their investments will be returned.