888 results for power to extend time
Abstract:
Nowadays, when a user plans a touristic route, it is very difficult to find out which are the best places to visit. Given the great quantity of information available on the web, the user has to make a selection according to his or her preferences, and has to do so quickly, because the time available for a trip is limited. In the Itiner@ project, we aim to combine Semantic Web technology with Geographic Information Systems in order to offer personalized touristic routes around a region, based on user preferences and the time available. Using ontologies, it is possible to link, structure and share data and to obtain the result that best suits the user's preferences and current situation, faster and more precisely than without ontologies. To achieve these objectives we propose a web page combining a GIS server and a touristic ontology. As a step further, we also study how to extend this technology to mobile devices, given the rising interest in these devices and location-based services and their technological progress, which allows the user to have all the route information at hand during a touristic trip. We designed a small application to apply the combination of GIS and Semantic Web on a mobile device.
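The preference-and-time filtering described above can be sketched as a simple greedy selection. All names, POI data and ratings below are hypothetical stand-ins for what the ontology and GIS server would actually supply:

```python
from dataclasses import dataclass

@dataclass
class POI:
    name: str
    category: str       # e.g. "museum", "park" (ontology concept)
    visit_minutes: int
    rating: float       # relevance score, e.g. derived from the ontology

def plan_route(pois, preferred_categories, time_budget_minutes):
    """Greedy selection: keep the highest-rated POIs matching the
    user's preferences that still fit into the available time."""
    candidates = [p for p in pois if p.category in preferred_categories]
    candidates.sort(key=lambda p: p.rating, reverse=True)
    route, used = [], 0
    for p in candidates:
        if used + p.visit_minutes <= time_budget_minutes:
            route.append(p)
            used += p.visit_minutes
    return route

# Example: a user with 3 hours who prefers museums and parks.
pois = [
    POI("Dali Museum", "museum", 90, 4.8),
    POI("Old Town Walk", "walk", 60, 4.2),
    POI("City Park", "park", 45, 3.9),
    POI("Castle", "museum", 120, 4.5),
]
route = plan_route(pois, {"museum", "park"}, 180)
```

A real implementation would pull the candidate set from the ontology via the GIS server and also order the selected POIs geographically; the sketch only shows the preference/time filtering step.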
Abstract:
During the 19th century, large irrigation projects were carried out in Catalunya on the lands best suited to them, such as those of the central depression around Lleida, the Ebro delta or the Baix Llobregat. The exception was the Empordà plain, specifically its northern half. Nevertheless, attempts were numerous, although none produced practical results until the 1960s. This has reinforced how little is known about them, and this article seeks to remedy that deficit. First, it reviews some of the travellers and scholars who, between the 17th and 19th centuries, provided data on the existing irrigation systems and some proposals to improve them. It then analyses the three most significant attempts to expand them during the second half of the 19th century, addressing their objectives, their peculiarities, their promoters, the discourses that justified them and the reasons for their failure.
Abstract:
This thesis examines, through three essays, the role of the social context and of people's concern for justice in explaining workplace aggressive behaviors.
In the first essay, I argue that a work group's instrumental climate (a climate emphasizing respect of organizational procedures) deters employees from manifesting counterproductive work behaviors through the informal sanctions (i.e., socio-emotional disapproval) they anticipate from the group for misbehaving. A contrario, a work group's affective climate (a climate concerned about others' well-being) leads employees to infer fewer informal sanctions and thus indirectly facilitates counterproductive work behaviors. I additionally expect these indirect effects to be conditional on employees' levels of conscientiousness and agreeableness. Cross-level structural equations on cross-sectional data obtained from 158 employees in 26 work groups supported my expectations. By promoting collective responsibility for the respect of organizational rules and by knowing what their work group considers threatening to its well-being, leaders may be able to prevent counterproductive work behaviors.
Adopting an organizational justice perspective, the second essay provides a theoretical explanation of why and how collective deviance can emerge in a collective. In interdependent situations, employees use justice perceptions to infer others' cooperative intent. Even if moral transgressions (e.g., injustice) are ambiguous, their repetition and configuration within a team can lead employees to assign blame and develop collective cynicism toward the transgressor. Over time, collective cynicism (a shared belief about the transgressor's intentional lack of integrity) progressively constrains the diversity of employees' responses to blame and leads collective deviance to emerge. This essay contributes to workplace deviance research by offering a theoretical framework for investigating the phenomenon at the collective level, which organizations' efforts to manage and prevent deviance should take into account.
In the third essay, I resolve an apparent contradiction in the literature showing that justice concerns sometimes lead employees to react aggressively to injustice and sometimes to refrain from doing so. Drawing on just-world theory, a cross-sectional field study and an experiment provide evidence that retaliatory tendencies following injustice are moderated by personal and general just-world beliefs. Whereas a high personal just-world belief facilitates retaliatory reactions to injustice, a high general just-world belief attenuates such reactions. This essay uncovers a dark side of personal just-world belief and a bright side of general just-world belief, and helps extend just-world theory to the working context.
Abstract:
The objective of this work was to evaluate the effect of insulin, alone or in association with equine chorionic gonadotropin (eCG), on the fertility of postpartum beef cows subjected to synchronization. A total of 340 cows were subjected to fixed-time artificial insemination. In trial 1, the cows were subjected to temporary weaning (TW), while in trial 2 the same protocol was tested without TW. The addition of an insulin injection to a progesterone/eCG-based protocol without TW increased the pregnancy rate of beef cows with a body condition score (BCS) equal to or lower than 2.5. Insulin had no effect on cows subjected to TW or with a BCS equal to or higher than 3.0.
Abstract:
Combinatorial optimization involves finding an optimal solution in a finite set of options; many everyday life problems are of this kind. However, the number of options grows exponentially with the size of the problem, such that an exhaustive search for the best solution is practically infeasible beyond a certain problem size. When efficient algorithms are not available, a practical approach to obtain an approximate solution to the problem at hand is to start with an educated guess and gradually refine it until we have a good-enough solution. Roughly speaking, this is how local search heuristics work. These stochastic algorithms navigate the problem search space by iteratively turning the current solution into new candidate solutions, guiding the search towards better solutions. The search performance, therefore, depends on structural aspects of the search space, which in turn depend on the move operator being used to modify solutions. A common way to characterize the search space of a problem is through the study of its fitness landscape, a mathematical object comprising the space of all possible solutions, their value with respect to the optimization objective, and a neighborhood relationship defined by the move operator. The landscape metaphor is used to explain the search dynamics as a sort of potential function. The concept is indeed similar to that of potential energy surfaces in physical chemistry. Borrowing ideas from that field, we propose to extend to combinatorial landscapes the notion of the inherent network formed by energy minima in energy landscapes. In our case, energy minima are the local optima of the combinatorial problem, and we explore several definitions for the network edges. At first, we perform an exhaustive sampling of local optima basins of attraction, and define weighted transitions between basins by accounting for all the possible ways of crossing the basin frontier via one random move.
Then, we reduce the computational burden by only counting the chances of escaping a given basin via random kick moves that start at the local optimum. Finally, we approximate network edges from the search trajectory of simple search heuristics, mining the frequency and inter-arrival time with which the heuristic visits local optima. Through these methodologies, we build a weighted directed graph that provides a synthetic view of the whole landscape, and that we can characterize using the tools of complex networks science. We argue that the network characterization can advance our understanding of the structural and dynamical properties of hard combinatorial landscapes. We apply our approach to prototypical problems such as the Quadratic Assignment Problem, the NK model of rugged landscapes, and the Permutation Flow-shop Scheduling Problem. We show that some network metrics can differentiate problem classes, correlate with problem non-linearity, and predict problem hardness as measured from the performances of trajectory-based local search heuristics.
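The first, exhaustive methodology can be sketched on a toy landscape. The bitstring fitness function below is an invented stand-in (not the NK, QAP or flow-shop instances studied), and the move operator is a one-bit flip; the sketch enumerates every basin of attraction and counts the one-move transitions that cross basin frontiers:

```python
from itertools import product
from collections import defaultdict

N = 6  # bitstring length; 2^6 = 64 solutions, small enough to enumerate

def fitness(s):
    # Toy rugged fitness over bitstrings (stand-in for a hard landscape).
    return sum((i + 1) * b for i, b in enumerate(s)) % 7

def neighbors(s):
    # One-bit-flip move operator.
    return [s[:i] + (1 - s[i],) + s[i + 1:] for i in range(len(s))]

def hill_climb(s):
    # Best-improvement local search; returns the local optimum reached,
    # which serves as the label of the basin containing s.
    while True:
        best = max(neighbors(s), key=fitness)
        if fitness(best) <= fitness(s):
            return s
        s = best

# Exhaustive sampling: the basin of attraction of every solution.
basin = {s: hill_climb(s) for s in product((0, 1), repeat=N)}

# Weighted directed edges: each one-move transition that crosses a
# basin frontier contributes to the edge between the two local optima.
edges = defaultdict(int)
for s, opt in basin.items():
    for n in neighbors(s):
        if basin[n] != opt:
            edges[(opt, basin[n])] += 1
```

The resulting `edges` dictionary is the weighted directed graph over local optima; on real instances the exhaustive enumeration is replaced by the sampling and trajectory-mining schemes described above.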
Abstract:
OBJECTIVES: To evaluate the performance of the INTERMED questionnaire score, alone or combined with other criteria, in predicting return to work after a multidisciplinary rehabilitation program in patients with non-specific chronic low back pain. METHODS: The INTERMED questionnaire is a biopsychosocial assessment and clinical classification tool that separates heterogeneous populations into subgroups according to case complexity. We studied 88 patients with chronic low back pain who followed an intensive multidisciplinary rehabilitation program on an outpatient basis. Before the program, we recorded the INTERMED score, radiological abnormalities, subjective pain severity, and sick leave duration. Associations between these variables and return to full-time work within 3 months after the end of the program were evaluated using one-sided Fisher tests and univariate logistic regression followed by multivariate logistic regression. RESULTS: The univariate analysis showed a significant association between the INTERMED score and return to work (P<0.001; odds ratio, 0.90; 95% confidence interval, 0.86-0.96). In the multivariate analysis, prediction was best when the INTERMED score and sick leave duration were used in combination (P=0.03; odds ratio, 0.48; 95% confidence interval, 0.25-0.93). CONCLUSION: The INTERMED questionnaire is useful for evaluating patients with chronic low back pain. It could be used to improve the selection of patients for intensive multidisciplinary programs, thereby improving the quality of care, while reducing healthcare costs.
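As an illustration of how the reported odds ratio translates into predicted probabilities, here is a hypothetical univariate logistic model. The coefficient is derived from the published odds ratio of 0.90 per INTERMED point; the intercept is an assumed value, not taken from the study:

```python
import math

# beta is the log of the reported per-point odds ratio (0.90).
beta = math.log(0.90)
intercept = 2.0  # assumed for illustration only

def return_to_work_probability(intermed_score):
    """Predicted probability of return to work from a univariate
    logistic model: logit(p) = intercept + beta * score."""
    logit = intercept + beta * intermed_score
    return 1.0 / (1.0 + math.exp(-logit))

def odds(p):
    return p / (1.0 - p)

# Each additional INTERMED point multiplies the odds by exactly 0.90:
odds_ratio = (odds(return_to_work_probability(21))
              / odds(return_to_work_probability(20)))
```

The multivariate model in the study additionally includes sick leave duration; the sketch only shows how a per-unit odds ratio below 1 means higher complexity scores predict lower odds of returning to work.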
Abstract:
Competition between mobile phone manufacturers is intensifying continuously. As products become almost equal in their technical characteristics, customers are starting to pay attention to the phone's other features as well. The decoration of the phone and the possibility of personalizing it are becoming increasingly important factors in the competition between phone manufacturers. The customer is also not willing to wait for the phone, but wants it quickly. A short lead time is a vital competitive factor for phone manufacturers. The task of this thesis is to study the technological possibilities for a subcontractor of phone manufacturers to expand its product range to meet today's demand for decorating mobile terminals. The applications of the technologies, production costs, lead times and the risks of the technologies are the main subjects of the study. The aim is to reach an understanding of the target company's possibilities to meet its customer's needs with the existing technologies and their combinations.
Abstract:
A mathematical model of the voltage drop which arises in on-chip power distribution networks is used to compare the maximum voltage drop in the case of different geometric arrangements of the pads supplying power to the chip. These include the square or Manhattan power pad arrangement, which currently predominates, as well as equilateral triangular and hexagonal arrangements. In agreement with the findings in the literature and with physical and SPICE models, the equilateral triangular power pad arrangement is found to minimize the maximum voltage drop. This headline finding is a consequence of relatively simple formulas for the voltage drop, with explicit error bounds, which are established using complex analysis techniques, and elliptic functions in particular.
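The headline result has a simple geometric counterpart: at equal pad density, the equilateral triangular lattice minimizes the covering radius, i.e. the distance from the worst-supplied point on the chip to its nearest pad. The sketch below compares the three arrangements under that geometric proxy only; it does not reproduce the paper's complex-analysis voltage-drop formulas, and "hexagonal" is taken here to mean the honeycomb (hexagon-vertex) arrangement:

```python
import math

def covering_radius(arrangement):
    """Farthest distance from any chip point to its nearest pad,
    normalised to a density of one pad per unit area."""
    if arrangement == "square":
        a = 1.0                                 # area per pad: a^2
        return a / math.sqrt(2)                 # worst point: cell centre
    if arrangement == "triangular":
        a = math.sqrt(2 / math.sqrt(3))         # area per pad: (sqrt(3)/2) a^2
        return a / math.sqrt(3)                 # worst point: triangle circumcentre
    if arrangement == "honeycomb":
        a = math.sqrt(4 / (3 * math.sqrt(3)))   # area per pad: (3 sqrt(3)/4) a^2
        return a                                # worst point: hexagon centre
    raise ValueError(arrangement)

radii = {k: covering_radius(k)
         for k in ("square", "triangular", "honeycomb")}
```

Under this proxy the triangular arrangement gives roughly a 12% smaller covering radius than the square one at equal pad count, consistent with the ranking established analytically for the maximum voltage drop.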
Abstract:
To save battery power in the mobile devices of a PeerHood network, the monitoring of a mobile device's network neighborhood is transferred to a fixed device. The transfer of the monitoring tasks is intended to take place when the device remains stationary, for example on office premises. While the device is stationary, the network neighborhood can be monitored with the resources of the fixed device, and network changes can be pushed to the mobile device as needed. When the mobile device is only in listening mode it saves battery power, because it does not need to actively transmit data with its network interfaces. The network interfaces remain in sleep mode and only wait for incoming data. However, transferring the network neighborhood monitoring tasks does not affect the user's use of services, so the battery savings depend greatly on the user's actions: the user can still use the services of other PeerHood devices or offer his or her own.
Abstract:
Water is often considered to be an ordinary substance since it is transparent, odourless, tasteless and it is very common in nature. As a matter of fact it can be argued that it is the most remarkable of all substances. Without water life on Earth would not exist.
Water is the major component of cells, typically forming 70 to 95% of cellular mass, and it provides an environment for innumerable organisms to live in, since it covers 75% of the Earth's surface. Water is a simple molecule made of two hydrogen atoms and one oxygen atom, H2O. The small size of the molecule stands in contrast with its unique physical and chemical properties. Among those, the fact that, at the triple point, liquid water is denser than ice is especially remarkable. Despite its special importance in life science, water is systematically removed from biological specimens investigated by electron microscopy. This is because the high vacuum of the electron microscope requires that the biological specimen be observed in dry conditions. For 50 years the science of electron microscopy has addressed this problem, resulting in numerous preparation techniques presently in routine use. Typically these techniques consist in fixing the sample (chemically or by freezing) and replacing its water by a plastic which is transformed into a rigid block by polymerisation. The block is then cut into thin sections (c. 50 nm) with an ultramicrotome at room temperature. Usually, these techniques introduce several artefacts, most of them due to water removal. In order to avoid these artefacts, the specimen can be frozen, cut and observed at low temperature. However, liquid water crystallizes into ice upon freezing, thus causing severe damage. Ideally, liquid water is solidified into a vitreous state. Vitrification consists in solidifying water so rapidly that ice crystals have no time to form. A breakthrough took place when vitrification of pure water was discovered. Since this discovery, the thin-film vitrification method has been used with success for the observation of biological suspensions of small particles. Our work was to extend the method to bulk biological samples, which have to be vitrified, cryosectioned into vitreous sections and observed in a cryo-electron microscope.
This technique is called cryo-electron microscopy of vitreous sections (CEMOVIS). It is now believed to be the best way to preserve the ultrastructure of biological tissues and cells very close to the native state for electron microscopic observation. Recently, CEMOVIS has become a practical method achieving excellent results. It has, however, some severe limitations, the most important of them certainly being due to cutting artefacts. They are the consequence of the nature of vitreous material and of the fact that vitreous sections cannot be floated on a liquid, as is the case for plastic sections cut at room temperature. The aim of the present work has been to improve our understanding of the cutting process and of cutting artefacts, thus finding optimal conditions to minimise or prevent these artefacts. An improved model of the cutting process and redefinitions of cutting artefacts are proposed. Results obtained with CEMOVIS under these conditions are presented and compared with results obtained with conventional methods.
Abstract:
Palinspastic reconstructions offer an ideal framework for geological, geographical, oceanographic and climatological studies. As historians of the Earth, "reconstructers" try to decipher the past. Since they know that continents are moving, geologists are trying to retrieve the continents' distributions through the ages.
If Wegener's view of continent motions was revolutionary at the beginning of the 20th century, we have known since the early 1960s that continents are not drifting aimlessly in the oceanic realm but are included in a larger set comprising, all at once, the oceanic and the continental crust: the tectonic plates. Unfortunately, mainly due to technical and historical issues, this idea seems not to receive a sufficient echo among our particularly concerned community. However, we are intimately convinced that, by applying specific methods and principles, we can escape the traditional "Wegenerian" point of view to, at last, reach real plate tectonics. The main aim of this study is to defend this point of view by exposing, with all necessary details, our methods and tools. Starting with the paleomagnetic and paleogeographic data classically used in reconstruction studies, we developed a modern methodology placing the plates and their kinematics at the centre of the issue. Using assemblies of continents (referred to as "key assemblies") as anchors distributed all along the scope of our study (ranging from Eocene time to Cambrian time), we develop geodynamic scenarios leading from one to the next, from the past to the present. In between, lithospheric plates are progressively reconstructed by adding/removing oceanic material (symbolized by synthetic isochrones) to/from the major continents. Except during collisions, plates are moved as single rigid entities. The only evolving elements are the plate boundaries, which are preserved, follow a consistent geodynamical evolution through time and form an interconnected network through space. This "dynamic plate boundaries" approach integrates plate buoyancy factors, ocean spreading rates, subsidence patterns, stratigraphic and paleobiogeographic data, as well as major tectonic and magmatic events. It offers good control on plate kinematics and provides severe constraints for the model.
This multi-source approach requires efficient data management. Prior to this study, the critical mass of necessary data had become an almost insurmountable obstacle. GIS (Geographic Information Systems) and geodatabases are modern informatics tools specifically devoted to storing, analyzing and managing data and associated attributes spatially referenced on the Earth. By developing the PaleoDyn database in ArcGIS software we converted the mass of scattered data offered by the geological record into valuable geodynamical information easily accessible for the creation of reconstructions. At the same time, by programming specific tools we both facilitated the reconstruction work (task automation) and enhanced the model (by greatly increasing the kinematic control of plate motions thanks to plate velocity models). Based on the 340 newly defined terranes, we developed a revised set of 35 reconstructions, each associated with its own velocity model. Using this unique dataset we are now able to tackle major issues of modern geology (such as global sea-level variations and climate changes). We started by studying one of the major unsolved issues of modern plate tectonics: the driving mechanism of plate motions. We observed that, all along the Earth's history, plate rotation poles (describing plate motions across the Earth's surface) tend to follow a roughly linear distribution along a band going from the Northern Pacific through northern South America, the Central Atlantic, Northern Africa and Central Asia up to Japan. Basically, this signifies that plates tend to escape this median plane. In the absence of an unidentified methodological bias, we interpreted this as the potential secular influence of the Moon on plate motions. The oceanic realms are the cornerstone of our model and we attached particular interest to reconstructing them in detail. In this model, the oceanic crust is preserved from one reconstruction to the next.
The crustal material is symbolised by synthetic isochrons whose ages are known. We also reconstruct the margins (active or passive), the mid-oceanic ridges and the intra-oceanic subductions. Using this detailed oceanic dataset, we developed unique 3-D bathymetric models offering better precision than all previously existing ones.
Abstract:
In electric drives, frequency converters are used to generate, for the electric motor, an AC voltage with variable frequency and amplitude. When considering the annual sales of drives, both in money and in units sold, low-performance drives predominate. These drives have to be very cost-effective to manufacture and use, while they are also expected to fulfil the harmonic distortion standards. One of the objectives has also been to extend the lifetime of the frequency converter. In a traditional frequency converter, a relatively large electrolytic DC-link capacitor is used. Electrolytic capacitors are large, heavy and rather expensive components. In many cases, the lifetime of the electrolytic capacitor is the main factor limiting the lifetime of the frequency converter. To overcome this problem, the electrolytic capacitor is replaced with a metallized polypropylene film capacitor (MPPF). The MPPF has improved properties compared with the electrolytic capacitor. By replacing the electrolytic capacitor with a film capacitor, the energy storage of the DC link is decreased. Thus, the instantaneous power supplied to the motor correlates with the instantaneous power taken from the network. This yields a continuous DC-link current fed by the diode rectifier bridge. As a consequence, the line current harmonics clearly decrease. Because of the decreased energy storage, the DC-link voltage fluctuates. This sets additional requirements on the controllers of the frequency converter, which must compensate for the fluctuation in the supplied motor phase voltages. In this work, three-phase and single-phase frequency converters with a small DC-link capacitor are analyzed. The evaluation is based on simulations and laboratory measurements.
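The effect of shrinking the DC-link energy storage can be illustrated with the capacitor energy balance 0.5*C*(V0^2 - V1^2) = P*dt: if the DC link alone must supply the motor power P for a short interval dt, a smaller capacitance produces a much larger voltage dip. All component values below are invented for illustration and are not taken from the work:

```python
import math

def dc_link_voltage_after(C, v0, power, dt):
    """Voltage after the DC-link capacitor alone supplies `power` [W]
    for `dt` [s], from the energy balance 0.5*C*(v0^2 - v1^2) = P*dt."""
    e0 = 0.5 * C * v0**2          # stored energy before the interval
    e1 = e0 - power * dt          # stored energy after the interval
    if e1 < 0:
        raise ValueError("capacitor fully discharged within dt")
    return math.sqrt(2 * e1 / C)

V0 = 560.0    # nominal DC-link voltage [V] (assumed)
P = 4000.0    # motor power drawn during the gap [W] (assumed)
dt = 1e-4     # 0.1 ms supply gap [s] (assumed)

v_electrolytic = dc_link_voltage_after(1000e-6, V0, P, dt)  # 1000 uF
v_film = dc_link_voltage_after(20e-6, V0, P, dt)            # 20 uF MPPF
```

With these (hypothetical) numbers the large electrolytic capacitor dips by well under a volt, while the small film capacitor dips by tens of volts, which is the fluctuation the converter's controllers must compensate for.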
Abstract:
The application of forced unsteady-state reactors in case of selective catalytic reduction of nitrogen oxides (NOx) with ammonia (NH3) is sustained by the fact that favorable temperature and composition distributions which cannot be achieved in any steady-state regime can be obtained by means of unsteady-state operations. In a normal way of operation the low exothermicity of the selective catalytic reduction (SCR) reaction (usually carried out in the range of 280-350°C) is not enough to maintain by itself the chemical reaction. A normal mode of operation usually requires supply of supplementary heat increasing in this way the overall process operation cost. Through forced unsteady-state operation, the main advantage that can be obtained when exothermic reactions take place is the possibility of trapping, beside the ammonia, the moving heat wave inside the catalytic bed. The unsteady state-operation enables the exploitation of the thermal storage capacity of the catalyticbed. The catalytic bed acts as a regenerative heat exchanger allowing auto-thermal behaviour when the adiabatic temperature rise is low. Finding the optimum reactor configuration, employing the most suitable operation model and identifying the reactor behavior are highly important steps in order to configure a proper device for industrial applications. The Reverse Flow Reactor (RFR) - a forced unsteady state reactor - corresponds to the above mentioned characteristics and may be employed as an efficient device for the treatment of dilute pollutant mixtures. As a main disadvantage, beside its advantages, the RFR presents the 'wash out' phenomena. This phenomenon represents emissions of unconverted reactants at every switch of the flow direction. As a consequence our attention was focused on finding an alternative reactor configuration for RFR which is not affected by the incontrollable emissions of unconverted reactants. In this respect the Reactor Network (RN) was investigated. 
Its configuration consists of several reactors connected in a closed sequence, simulating a moving bed by changing the feeding position of the reactants. In the RN, the flow direction is maintained, ensuring uniform catalyst exploitation, and at the same time the 'wash out' phenomenon is eliminated. The simulated moving bed (SMB) can operate in transient mode, giving a practically constant exit concentration and high conversion levels. The main advantage of reactor network operation is the possibility of obtaining auto-thermal behavior with nearly uniform catalyst utilization. However, the reactor network presents only a small range of switching times that allow an ignited state to be reached and maintained. Even so, a proper study of the complex behavior of the RN may give the information necessary to overcome the difficulties that can appear in RN operation. The complexity of unsteady-state reactors arises from the fact that these reactor types are characterized by short contact times and complex interactions between heat and mass transport phenomena. Such interactions can give rise to remarkably complex dynamic behavior characterized by spatio-temporal patterns, chaotic changes in concentration and traveling waves of heat or chemical reactivity. The main efforts of current research concern the improvement of contact modalities between reactants, the possibility of thermal wave storage inside the reactor and the improvement of the kinetic activity of the catalyst used. Attention to these aspects is important when high activity even at low feeding temperatures and low emissions of unconverted reactants are the main operational concerns.
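The rotating feed position of the reactor network can be sketched as a simple schedule: the feed advances one reactor per switching time while the flow direction around the closed sequence never reverses. This is a minimal scheduling sketch, not the thesis model; the reactor count and number of switches are illustrative:

```python
from collections import Counter

def flow_order(n_reactors, feed_position):
    """Closed-sequence flow path starting at the currently fed reactor:
    only the feed position moves, the flow direction never reverses."""
    return [(feed_position + j) % n_reactors for j in range(n_reactors)]

def feed_schedule(n_reactors, n_switches):
    """Feed position advances by one reactor at every switching time."""
    return [i % n_reactors for i in range(n_switches)]

schedule = feed_schedule(3, 12)
counts = Counter(schedule)

# Every reactor is fed equally often -> uniform catalyst exploitation.
assert set(counts.values()) == {4}

# With the feed on reactor 1, the flow traverses 1 -> 2 -> 0: no flow
# reversal, hence no 'wash out' burst of unconverted reactant.
assert flow_order(3, 1) == [1, 2, 0]
```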
Also, the prediction of the pseudo- or steady-state performance of the reactor (regarding conversion, selectivity and thermal behavior) and the dynamic reactor response during exploitation are important aspects in finding the optimal control strategy for forced unsteady-state catalytic tubular reactors. The design of an adapted reactor requires knowledge of the influence of its operating conditions on the overall process performance and a precise evaluation of the range of operating parameters for which sustained dynamic behavior is obtained. An a priori estimation of the system parameters results in a reduction of the computational effort; usually, the convergence of unsteady-state reactor systems requires integration over hundreds of cycles, depending on the initial guess of the parameter values. The investigation of various operation models and thermal transfer strategies gives reliable means to obtain recuperative and regenerative devices capable of maintaining auto-thermal behavior in the case of low-exothermic reactions. In the present research work, a gradual analysis of the SCR of NOx with ammonia in forced unsteady-state reactors was carried out. The investigation covers the presentation of the general problems related to the effect of noxious emissions on the environment, the analysis of catalyst types suitable for the process, the mathematical approach for modeling and finding the system solutions, and the experimental investigation of the device found to be most suitable for the present process. In order to gain information, in a fast and easy way, about forced unsteady-state reactor design, operation, important system parameters and their values, mathematical description, mathematical methods for solving systems of partial differential equations and other specific aspects, a case-based reasoning (CBR) approach has been used.
This approach, using the experience of past similar problems and their adapted solutions, may provide a method for gaining information and solutions for new problems related to forced unsteady-state reactor technology. As a consequence, a CBR system was implemented and a corresponding tool was developed. Further on, giving up the hypothesis of isothermal operation, the feasibility of the SCR of NOx with ammonia in the RFR and in the RN with variable feeding position was investigated by means of numerical simulation. The hypothesis of non-isothermal operation was taken into account because, in our opinion, if a commercial catalyst is considered it is not possible to modify its chemical activity and adsorptive capacity to improve the operation, but it is possible to change the operating regime. In order to identify the most suitable device for the unsteady-state reduction of NOx with ammonia, from the perspective of recuperative and regenerative devices, a comparative analysis of the performance of the two devices mentioned above was carried out. The assumption of isothermal conditions at the beginning of the forced unsteady-state investigation simplified the analysis, making it possible to focus on the impact of the conditions and mode of operation on the dynamic features caused by the trapping of one reactant in the reactor, without considering the impact of the thermal effect on overall reactor performance. The non-isothermal system was then investigated in order to point out the important influence of the thermal effect on overall reactor performance, studying the possibility of using the RFR and the RN as recuperative and regenerative devices and the possibility of achieving sustained auto-thermal behavior in the case of the low-exothermic SCR of NOx with ammonia and low-temperature gas feeding.
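The retrieval step of the case-based reasoning approach can be sketched as a weighted nearest-neighbour search over stored reactor cases. This is a minimal sketch; the case attributes, weights and stored solutions are invented for illustration and are not taken from the implemented tool:

```python
def retrieve(query, cases, weights):
    """Return the stored case whose operating parameters are closest
    to the query, using a weighted squared-distance measure."""
    def distance(case):
        return sum(w * (query[k] - case["params"][k]) ** 2
                   for k, w in weights.items())
    return min(cases, key=distance)

# Illustrative case base of past forced unsteady-state reactor studies.
cases = [
    {"id": "RFR-low-T",
     "params": {"switch_time_s": 300, "inlet_T_C": 50},
     "solution": "reverse flow reactor, heat wave trapped in the bed"},
    {"id": "RN-3-units",
     "params": {"switch_time_s": 60, "inlet_T_C": 120},
     "solution": "reactor network with rotating feed position"},
]
weights = {"switch_time_s": 1.0, "inlet_T_C": 10.0}

# A new problem is matched to the most similar past case, whose adapted
# solution then serves as the starting point.
best = retrieve({"switch_time_s": 90, "inlet_T_C": 110}, cases, weights)
```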
Besides the influence of the thermal effect, the influence of the principal operating parameters, such as the switching time, inlet flow rate and initial catalyst temperature, has been stressed. This analysis is important not only because it allows a comparison between the two devices and optimisation of the operation, but also because the switching time is the main operating parameter; an appropriate choice of this parameter enables the process constraints to be fulfilled. The level of conversion achieved, the more uniform temperature profiles, the uniformity of catalyst exploitation and the much simpler mode of operation establish the RN as the more suitable device for the SCR of NOx with ammonia, both in normal operation and from the perspective of control strategy implementation. Simplified theoretical models have also been proposed in order to describe the performance of forced unsteady-state reactors and to estimate their internal temperature and concentration profiles. The general idea was to extend the study of catalytic reactor dynamics to perspectives that had not yet been analyzed. The experimental investigation of the RN revealed good agreement between the data obtained by model simulation and those obtained experimentally.
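The cycle-to-cycle convergence mentioned earlier (integration over hundreds of switching cycles until a periodic state is reached) can be detected with a simple criterion comparing successive end-of-cycle profiles. This is a sketch with invented profiles and tolerance, not the thesis solver:

```python
def cyclic_steady_state_cycle(profiles, tol=1e-3):
    """Return the index of the first cycle whose end-of-cycle temperature
    profile differs from the previous one by less than tol at every grid
    point, or None if the simulation has not yet converged."""
    for i in range(1, len(profiles)):
        if max(abs(a - b) for a, b in zip(profiles[i], profiles[i - 1])) < tol:
            return i
    return None

# Toy sequence of end-of-cycle temperature profiles (two grid points)
# approaching a periodic state.
profiles = [
    [300.0, 400.0],
    [320.0, 410.0],
    [320.5, 410.2],
    [320.5001, 410.2001],
]
converged_at = cyclic_steady_state_cycle(profiles, tol=1e-3)  # -> 3
```

A good a priori parameter guess shortens this loop, which is why the abstract emphasises a priori estimation of the system parameters.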
Resumo:
This Master's thesis examined the use of life cycle assessment (LCA) and its results in support of environmentally conscious product design. The beginning of the thesis describes the issues that connect a company to environmental matters. During the work, life cycle assessments were carried out for the products of two companies, and the practical experience gained from these studies is used in the thesis. The main emphasis of the practical experience was on the issues that arose while performing the life cycle assessment itself and on interpreting the results from the viewpoint of setting targets for environmentally conscious product design. The goals of environmentally conscious product design are: reducing material and energy consumption, improving recyclability, extending product lifetime, and avoiding raw materials harmful to the environment. Each of these goals has its own particular characteristics that must be taken into account when using life cycle assessment. During the work it was found that special attention must be paid to the data collection phase. Even today it is difficult to obtain all the necessary inventory data, especially from subcontractors. All parties must in the future pay attention to this phase, which is decisive for carrying out a life cycle assessment. Attention was also paid to the interpretation of the results. Identifying the significant factors and the possibilities of influencing them is an important part of target setting. Comprehensive and reliable LCA results are a prerequisite for setting well-founded targets, and these in turn require sufficiently extensive, high-quality and consistent input data, which even today are rarely available.
Resumo:
The aim of this thesis was to identify energy-saving opportunities in the power plant, district heating, and electricity transmission and distribution operations of Etelä-Savon Energia Oy. The work was connected to the national energy conservation agreement signed by Etelä-Savon Energia Oy, in which the company committed to analysing its energy use and drawing up an energy-saving plan based on an energy analysis. In this work, the current energy use was analysed and savings opportunities were sought. For each identified opportunity, the magnitude of the saving, the size of the required investment, and the direct payback time of the investment were determined. No major savings opportunities were found at the power plant, but a few smaller ones, mainly related to heating, were identified. In district heating, savings opportunities were found mainly in the standby heating of heat plants, but no profitable opportunities were found in the district heating network. In electricity transmission and distribution, increasing reactive power compensation would reduce losses in the network and reduce the need for compensation at the power plant generator. No major savings opportunities were found, because over the years the company has streamlined its operations and actively reduced energy losses.
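The direct payback time used above to rank savings measures is simply the investment divided by the annual saving. The sketch below shows the calculation; the measures and figures are invented for illustration and are not taken from the audit:

```python
def simple_payback_years(investment_eur, annual_saving_eur):
    """Direct (undiscounted) payback time of an energy-saving measure."""
    return investment_eur / annual_saving_eur

# Illustrative measures: (name, investment EUR, annual saving EUR/year)
measures = [
    ("standby heating control at heat plants", 4000.0, 2000.0),
    ("reactive power compensation in the grid", 15000.0, 3000.0),
]

# Rank measures by payback, shortest first: the standby heating measure
# pays back in 2 years, the compensation measure in 5 years.
ranked = sorted(measures, key=lambda m: simple_payback_years(m[1], m[2]))
```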