949 results for Optimisation de formes
Abstract:
The main objective of this thesis is to develop an industrial refrigeration or air-conditioning system that converts the potential of solar energy into cooling. This refrigeration system is based on ejecto-compression technology, which offers thermal compression as an economical alternative to costly mechanical compression. The refrigeration subsystem uses a reliable static device called an ejector, driven solely by useful heat supplied by solar energy. It is combined with a solar loop comprising, among other components, concentrating parabolic-trough collectors. The aim of this combination is to reach high overall energy and exergy efficiencies. Thermal storage is not considered in this thesis but will be integrated into the system in future work. As a first step, a new numerical and thermodynamic model of a single-phase ejector was developed. This design model applies the fluid inlet conditions (pressure, temperature and velocity) and their flow rates. It assumes constant-pressure mixing and subsonic flow at the diffuser inlet. It uses a real fluid (R141b), and the outlet pressure is imposed. It also incorporates two important innovations: it uses a constant polytropic efficiency (rather than the constant isentropic efficiencies commonly used in the literature), and it does not impose a fixed value for the mixing efficiency but determines it from the computed flow conditions. The constant polytropic efficiency is used to quantify the irreversibilities during the acceleration and deceleration processes, as in turbomachinery. The numerical design model was validated against an experimental study from the literature.
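As a rough illustration of the polytropic-efficiency approach mentioned above, the sketch below applies the ideal-gas polytropic relation to a recompression step. The thesis model uses a real fluid (R141b), so the perfect-gas assumption and all numbers here are purely illustrative:

```python
import math

def exit_temperature(T_in, p_in, p_out, gamma, eta_poly, compression=True):
    """Ideal-gas sketch: temperature change across an accelerating or
    decelerating process with a constant polytropic efficiency.
    Compression: T_out/T_in = (p_out/p_in)**((gamma-1)/(gamma*eta_poly))
    Expansion:   T_out/T_in = (p_out/p_in)**(eta_poly*(gamma-1)/gamma)
    """
    r = p_out / p_in
    k = (gamma - 1.0) / gamma
    exponent = k / eta_poly if compression else k * eta_poly
    return T_in * r ** exponent

# Illustrative numbers only (not from the thesis): diffuser recompression
T_out = exit_temperature(T_in=300.0, p_in=100e3, p_out=200e3,
                         gamma=1.12, eta_poly=0.85)
```

Because the efficiency appears in the exponent, the irreversibility is distributed continuously along the process path, which is what makes the polytropic formulation consistent across small and large pressure ratios.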
The second step proposes a TRNSYS-compatible numerical model, based on experimental data from the literature, for the parabolic-trough solar collector, and an EES numerical model for the ejector refrigeration subsystem. Finally, after the numerical and thermodynamic models had been developed, a further study proposed a model of the solar ejector refrigeration system integrating those of its components. Several parametric studies were carried out to assess the effects of certain parameters (refrigerant superheat, heat-transfer-fluid heat rate and solar irradiance) on its performance. The proposed methodology is based on the laws of classical thermodynamics and on finite-dimension thermodynamics relations. New exergy analyses based on the concept of transiting exergy allowed the evaluation of two thermodynamically important indicators: the exergy produced and the exergy consumed, whose ratio expresses the intrinsic exergy efficiency. The results obtained from the studies applied to the ejector and to the overall system show that the traditional calculation of exergy efficiency according to Grassmann is no longer a relevant criterion for assessing the thermodynamic performance of ejectors in refrigeration systems.
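The distinction drawn above between the Grassmann efficiency and the intrinsic exergy efficiency can be made concrete with a small sketch. The kW figures are invented, and the treatment of transiting exergy is simplified to a single scalar subtracted from both terms:

```python
def grassmann_efficiency(exergy_out, exergy_in):
    """Traditional output/input exergy ratio."""
    return exergy_out / exergy_in

def intrinsic_efficiency(exergy_out, exergy_in, exergy_transit):
    """Transiting exergy merely passes through the device unchanged, so it
    is removed from both the produced and the consumed terms."""
    produced = exergy_out - exergy_transit
    consumed = exergy_in - exergy_transit
    return produced / consumed

# Illustrative: 100 kW of exergy in, 80 kW out, 60 kW merely transiting
eta_g = grassmann_efficiency(80.0, 100.0)
eta_i = intrinsic_efficiency(80.0, 100.0, 60.0)
```

The intrinsic value is always lower when transit exergy is present, which is why the abstract argues the Grassmann ratio flatters ejector performance.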
Abstract:
The performance of scintillation detectors, composed of a scintillator crystal coupled to a photodetector, depends critically on the efficiency with which scintillation photons are collected and extracted from the crystal to the sensor. In highly pixelated imaging systems (e.g. PET, CT), the scintillators must be arranged in compact arrays with form factors unfavourable to photon transport, to the detriment of detector performance. The goal of this project is to optimise the performance of these pixel detectors by identifying the sources of light loss associated with the spectral, spatial and angular characteristics of the scintillation photons incident on the scintillator faces. Such information, acquired by Monte Carlo simulation, enables an appropriate weighting for evaluating the gains achievable through scintillator structuring methods aimed at improved light extraction towards the photodetector. A factorial design was used to assess the magnitude of parameters affecting light collection, notably the absorption of the adhesive materials ensuring the mechanical integrity of the crystal arrays and the optical performance of reflectors, both of which have a considerable impact on light output. Moreover, a reflector widely used for its exceptional optical performance was characterised under conditions more realistic than immersion in air, in which its reflectivity is always reported. A substantial loss of reflectivity when it is inserted within scintillator arrays was demonstrated by simulation and then confirmed experimentally. This explains the high crosstalk rates observed, and opens the way to array assembly methods that, depending on the application, either limit or take advantage of this unsuspected transparency.
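A minimal Monte Carlo sketch of the kind of angular loss analysis described above: it estimates, for a single pass with no reflector, the fraction of isotropically emitted photons reaching one face within the escape cone set by total internal reflection. The refractive index is a typical LYSO-like value chosen here as an assumption; the project's actual simulation, which tracks full spectral, spatial and angular distributions, is far more detailed:

```python
import math
import random

def escape_fraction(n_crystal=1.82, n_out=1.0, n_photons=100_000, seed=1):
    """Monte Carlo sketch: fraction of isotropic scintillation photons that
    hit one exit face within the critical angle (single pass, no reflector,
    no absorption). n_crystal = 1.82 is an assumed LYSO-like value."""
    rng = random.Random(seed)
    # cos of the critical angle from Snell's law: sin(theta_c) = n_out/n_crystal
    cos_crit = math.sqrt(max(0.0, 1.0 - (n_out / n_crystal) ** 2))
    escaped = 0
    for _ in range(n_photons):
        cos_theta = rng.uniform(-1.0, 1.0)  # isotropic emission direction
        if cos_theta > cos_crit:            # inside the escape cone
            escaped += 1
    return escaped / n_photons

frac = escape_fraction()
```

The analytic answer for this toy geometry is (1 - cos θc)/2, roughly 8% for these indices, which illustrates why structuring methods that widen the effective escape cone are attractive.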
Abstract:
The objective of this essay is to analyse the barriers to, and levers for, implementing industrial and territorial ecology synergies, in order to suggest how the company Électricité de France could contribute to their development. Historically, inter-company flow exchanges already took place autonomously, for economic and practical reasons. Today, with the rise of the circular economy, more and more stakeholders support the implementation of projects organised by a third-party facilitator. The ambition of these initiatives is to bring together economic and territorial actors in order to identify avenues for collaboration and find local solutions for recovering material and energy flows within a territory. The implementation of these projects, which allows the optimisation of production systems and the achievement of economic, environmental and social gains, also benefits from a degree of institutional support. However, barriers remain to the development of a mature industrial and territorial ecology in France. Flow exchanges notably create relationships of interdependence between actors. In this context, how can actors organise themselves to develop new synergies? Various levers can be mobilised in industrial and territorial ecology projects. Public subsidies and crowdfunding are advantageous sources of financial support. Changes to the regulations governing which companies are authorised to handle waste, as well as recourse to the end-of-waste procedure, could facilitate the establishment of synergies. In addition, having technical studies carried out by specialised actors and building new, suitable facilities would make it possible to meet growing recovery needs.
Appropriate contracting of exchanges also makes it possible to better manage interdependence and to formalise agreements arising from transparent discussions between actors. Finally, raising awareness and establishing a collaborative dynamic among stakeholders, together with regular exchanges between actors, foster their involvement and motivation. The initiatives organised by a third-party facilitator, today strongly encouraged by local authorities, ultimately aim to establish a collaborative dynamic between the public and private actors of the territories, so as to bring about self-sustaining synergies over the long term. The evolving institutional context appears favourable to the future development of new synergies. However, this progress must also be accompanied by eco-design and functional-economy solutions, as planned in the national circular economy strategy. Finally, a major human challenge is raising awareness among the public and industry in order to secure their commitment. Électricité de France, with its substantial resources, could contribute strongly to this goal by developing innovative offerings and finding avenues for collaboration with its customers. Strengthening its brand image would allow it to legitimise its position as a central actor in the energy transition, to participate in sustainable industrial and territorial development and to gain competitiveness, while earning public trust.
Abstract:
The conservation and valorisation of cultural heritage is of fundamental importance for our society, since it is witness to the legacies of human societies. In the case of metallic artefacts, because corrosion is a never-ending problem, the correct strategies for their cleaning and preservation must be chosen. Thus, the aim of this project was the development of protocols for cleaning archaeological copper artefacts by laser and plasma cleaning, since they allow the treatment of artefacts in a controlled and selective manner. Additionally, electrochemical characterisation of the artificial patinas was performed in order to obtain information on the protective properties of the corrosion layers. Reference copper samples with different artificial corrosion layers were used to evaluate the tested parameters. Laser cleaning tests resulted in partial removal of the corrosion products, but the laser-material interactions resulted in melting of the desired corrosion layers. The main obstacle for this process is that the materials that must be preserved show lower ablation thresholds than the undesired layers, which makes the proper elimination of dangerous corrosion products very difficult without damaging the artefacts. Different protocols should be developed for different patinas, and real artefacts should be characterised prior to any treatment to determine the best course of action. Low pressure hydrogen plasma cleaning treatments were performed on two kinds of patinas. In both cases the corrosion layers were partially removed. The total removal of the undesired corrosion products can probably be achieved by increasing the treatment time or applied power, or by increasing the hydrogen pressure. Since the process is non-invasive and does not modify the bulk material, modifying the cleaning parameters is easy.
EIS measurements show that, for the artificial patinas, the impedance increases while the patina is growing on the surface and then drops, probably due to diffusion reactions and a slow dissolution of copper. It appears from these results that the dissolution of copper is heavily influenced by diffusion phenomena and the corrosion product film porosity. Both techniques show good results for cleaning, as long as the proper parameters are used. These depend on the nature of the artefact and the corrosion layers that are found on its surface.
Abstract:
High energy efficiency and high performance are the key requirements for Internet of Things (IoT) end-nodes. Exploiting clusters of multiple programmable processors has recently emerged as a suitable solution to address this challenge. However, one of the main bottlenecks for multi-core architectures is the instruction cache. While private caches suffer from data replication and wasted area, fully shared caches lack scalability and form a bottleneck for the operating frequency. Hence we propose a hybrid solution in which a larger shared cache (L1.5) serves multiple cores connected through a low-latency interconnect to small private caches (L1). However, this is still limited by capacity misses when the L1 is small. Thus, we propose a sequential prefetch between L1 and L1.5 to improve performance with little area overhead. Moreover, to cut the critical path for better timing, we optimised the core instruction fetch stage with non-blocking transfers, adopting a 4 x 32-bit ring-buffer FIFO and adding a pipeline stage for conditional branches. We present a detailed comparison of the performance and energy efficiency of different instruction cache architectures recently proposed for Parallel Ultra-Low-Power clusters. On average, when executing a set of real-life IoT applications, our two-level cache improves performance by up to 20% at a 7% loss of energy efficiency with respect to the private cache. Compared to a shared cache system, it improves performance by up to 17% while keeping the same energy efficiency. Finally, an up to 20% timing (maximum frequency) improvement and software control enable the two-level instruction cache with prefetch to adapt to various battery-powered use cases, balancing high performance and energy efficiency.
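A toy software model of the two-level organisation described above, assuming plain LRU replacement and next-line prefetch into the shared level. Sizes and policies are illustrative, not those of the proposed hardware:

```python
class TwoLevelICache:
    """Toy model: private L1 backed by a shared L1.5 with sequential
    (next-line) prefetch. Line counts and line size are illustrative."""
    def __init__(self, l1_lines=4, l15_lines=64):
        self.l1_lines, self.l15_lines = l1_lines, l15_lines
        self.l1, self.l15 = [], []           # LRU order, most recent last
        self.hits_l1 = self.hits_l15 = self.misses = 0

    def _touch(self, cache, limit, line):
        if line in cache:
            cache.remove(line)
        cache.append(line)
        if len(cache) > limit:
            cache.pop(0)                     # evict least recently used

    def fetch(self, addr, line_size=16):
        line = addr // line_size
        if line in self.l1:
            self.hits_l1 += 1
        elif line in self.l15:
            self.hits_l15 += 1
            self._touch(self.l1, self.l1_lines, line)
        else:
            self.misses += 1                 # miss in both levels
            self._touch(self.l15, self.l15_lines, line)
            self._touch(self.l1, self.l1_lines, line)
        # sequential prefetch: pull the next line into L1.5 ahead of time
        if line + 1 not in self.l15:
            self._touch(self.l15, self.l15_lines, line + 1)

cache = TwoLevelICache()
for pc in range(0, 1024, 4):      # straight-line 32-bit instruction stream
    cache.fetch(pc)
```

On straight-line code every cache line after the first is caught by the prefetcher in L1.5, so only a single compulsory miss remains; this is the behaviour that lets a small private L1 avoid paying the capacity-miss penalty.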
Abstract:
This thesis deals with the analysis and optimisation of the book flows generated between the different branches of the public library Trondheim folkebibliotek, located in Trondheim, a city in northern Norway. The research is part of a multi-year project, SmartLIB, which the library is undertaking with NTNU - Norwegian University of Science and Technology. The aim of this thesis is to analyse possible solutions for optimising the flow of books generated by citizens' orders. A first phase of data collection and analysis provided the information needed to proceed with the research. Subsequently, the possibility of reducing the flows was analysed by assigning to each department the number of copies needed to cover 90% of demand, following the Poisson distribution. Three solutions were then analysed to optimise the flows generated by the books, the filling level of the boxes and the route of the truck that visits all the library branches daily. The Vehicle Routing Problem (VRP) supported this second study. A simulation model was built in Anylogic and used to validate the proposed solutions. The results led to solutions that optimise the overall flows, reducing book delivery delay time by 50%, reducing the box flow by 53% and consequently increasing the filling rate of each box by 44%. Possible future implementations of the solutions found include the installation of a new sorting machine at the library's main branch and the introduction, also at the main branch, of a new daily schedule.
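The Poisson sizing step described above can be sketched as follows: find the smallest number of copies n such that the Poisson CDF at n reaches the 90% coverage target. The mean demand below is invented for illustration:

```python
import math

def copies_for_coverage(mean_demand, coverage=0.90):
    """Smallest stock level n with P(Poisson(mean_demand) <= n) >= coverage.
    The CDF is accumulated term by term: P(k) = e^-m * m^k / k!."""
    n = 0
    term = math.exp(-mean_demand)   # P(X = 0)
    cdf = term
    while cdf < coverage:
        n += 1
        term *= mean_demand / n     # P(X = n) from P(X = n-1)
        cdf += term
    return n

# Illustrative: a title requested on average 2.3 times per period
n = copies_for_coverage(2.3)
```

Stocking each department this way removes the inter-branch transfers that would otherwise be triggered whenever local demand exceeds the local copies.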
Abstract:
From November 1982 to May 1999, 28 children with Rett syndrome were followed up for a mean period of 6 years and 2 months. Regression of developmental milestones started between the ages of 5 and 20 months. Nineteen cases of typical Rett syndrome had uneventful pre- and perinatal periods, loss of previously acquired purposeful hand skills, mental and motor regression, and developed hand stereotypies; sixteen had head growth deceleration and 12 gait apraxia. Nine patients were atypical cases: 2 formes frustes, 2 congenital, 3 with early seizure onset, 1 with preserved speech and 1 male. Epilepsy was present in 21 patients, predominantly partial seizures, and the drug of choice was carbamazepine (15 patients). At the initial evaluation most patients were in Stages II and III, and at follow-up in Stages III and IV. Three children died.
Abstract:
Single interface flow systems (SIFA) present some noteworthy advantages when compared to other flow systems, such as a simpler configuration, a more straightforward operation and control, and an undemanding optimisation routine. Moreover, the simple establishment of the reaction zone, which relies strictly on the mutual inter-dispersion of the adjoining solutions, can be exploited to set up multiple sequential reaction schemes providing supplementary information regarding the species under determination. In this context, strategies for accuracy assessment can be favourably implemented. To this end, the sample can be processed by two quasi-independent analytical methods and the final result calculated after considering the two different methods. Intrinsically more precise and accurate results would then be gathered. In order to demonstrate the feasibility of the approach, a SIFA system with spectrophotometric detection was designed for the determination of lansoprazole in pharmaceutical formulations. Two reaction interfaces with two distinct pi-acceptors, chloranilic acid (ClA) and 2,3-dichloro-5,6-dicyano-p-benzoquinone (DDQ), were implemented. Linear working concentration ranges between 2.71 x 10(-4) and 8.12 x 10(-4) mol L(-1) and between 2.17 x 10(-4) and 8.12 x 10(-4) mol L(-1) were obtained for the DDQ and ClA methods, respectively. When compared with the results furnished by the reference procedure, the results showed relative deviations lower than 2.7%. Furthermore, the repeatability was good, with r.s.d. lower than 3.8% and 4.7% for the DDQ and ClA methods, respectively. The determination rate was about 30 h(-1). (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
A fully automated methodology was developed for the determination of the thyroid hormones levothyroxine (T4) and liothyronine (T3). The proposed method exploits the formation of highly coloured charge-transfer (CT) complexes between these compounds, acting as electron donors, and pi-acceptors such as chloranilic acid (ClA) and 2,3-dichloro-5,6-dicyano-p-benzoquinone (DDQ). For automation of the analytical procedure a simple, fast and versatile single interface flow system (SIFA) was implemented, guaranteeing simplified performance optimisation, low maintenance and cost-effective operation. Moreover, the single reaction interface assured a convenient and straightforward approach for implementing Job's method of continuous variations, used to establish the stoichiometry of the formed CT complexes. Linear calibration plots for levothyroxine and liothyronine concentrations ranging from 5.0 x 10(-5) to 2.5 x 10(-4) mol L(-1) and from 1.0 x 10(-5) to 1.0 x 10(-4) mol L(-1), respectively, were obtained, with good precision (R.S.D. <4.6% and <3.9%) and with a determination frequency of 26 h(-1) for both drugs. The results obtained for pharmaceutical formulations were statistically comparable to the declared hormone amount, with relative deviations lower than 2.1%. The accuracy was confirmed by carrying out recovery studies, which furnished recovery values ranging from 96.3% to 103.7% for levothyroxine and 100.1% for liothyronine. (C) 2009 Elsevier B.V. All rights reserved.
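Job's method, used above to establish the CT-complex stoichiometry, can be sketched as follows: equimolar donor and acceptor solutions are mixed in varying proportions, and the absorbance peaks at the donor mole fraction corresponding to the complex ratio. The data points below are synthetic, invented for illustration:

```python
def jobs_stoichiometry(mole_fractions, absorbances):
    """Job's method of continuous variations: the donor:acceptor ratio
    follows from the donor mole fraction x_max at which the measured
    absorbance peaks, n_donor/n_acceptor = x_max / (1 - x_max)."""
    x_max = max(zip(absorbances, mole_fractions))[1]
    return x_max / (1.0 - x_max)

# Synthetic Job plot: absorbance peaking at a donor mole fraction of 0.5
x = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
a = [0.08, 0.17, 0.25, 0.31, 0.35, 0.30, 0.24, 0.16, 0.07]
ratio = jobs_stoichiometry(x, a)
```

A peak at x = 0.5 gives a ratio of 1, i.e. a 1:1 complex; a peak at x = 2/3 would indicate a 2:1 donor:acceptor complex. In practice the maximum is located by fitting the two branches of the plot rather than taking the raw maximum point.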
Abstract:
BACKGROUND: Xylitol is a sugar alcohol (polyalcohol) with many interesting properties for pharmaceutical and food products. It is currently produced by a chemical process, which has some disadvantages such as a high energy requirement. Microbiological production of xylitol has therefore been studied as an alternative, but its viability depends on optimisation of the fermentation variables. Among these, aeration is fundamental, because xylitol is produced only under adequate oxygen availability. In most experiments with xylitol-producing yeasts, low oxygen transfer volumetric coefficient (K(L)a) values are used to maintain microaerobic conditions. However, in the present study the use of relatively high K(L)a values resulted in high xylitol production. The effect of aeration was also evaluated via the profiles of xylose reductase (XR) and xylitol dehydrogenase (XD) activities during the experiments. RESULTS: The highest XR specific activity (1.45 +/- 0.21 U mg(protein)(-1)) was achieved during the experiment with the lowest K(L)a value (12 h(-1)), while the highest XD specific activity (0.19 +/- 0.03 U mg(protein)(-1)) was observed with a K(L)a value of 25 h(-1). Xylitol production was enhanced when K(L)a was increased from 12 to 50 h(-1), which resulted in the best condition observed, corresponding to a xylitol volumetric productivity of 1.50 +/- 0.08 g(xylitol) L(-1) h(-1) and an efficiency of 71 +/- 6.0%. CONCLUSION: The results showed that the enzyme activities during xylitol bioproduction depend greatly on the initial K(L)a value (oxygen availability). This finding supplies important information for further studies in molecular biology and genetic engineering aimed at improving xylitol bioproduction. (C) 2008 Society of Chemical Industry
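The headline figures above combine in a standard way; the sketch below shows the arithmetic with invented batch numbers, taking ≈0.917 g of xylitol per g of xylose as the commonly cited theoretical maximum yield (an assumption on our part, not a value from the paper):

```python
def volumetric_productivity(xylitol_g_per_L, hours):
    """Q_p: xylitol produced per litre of broth per hour (g L^-1 h^-1)."""
    return xylitol_g_per_L / hours

def yield_efficiency(xylitol_g_per_L, xylose_consumed_g_per_L,
                     y_theoretical=0.917):
    """Fraction of the theoretical yield achieved; 0.917 g/g is a commonly
    used theoretical maximum for xylitol from xylose (assumption)."""
    return (xylitol_g_per_L / xylose_consumed_g_per_L) / y_theoretical

# Illustrative batch (not the paper's data): 72 g/L xylitol in 48 h,
# from 110 g/L of xylose consumed
qp = volumetric_productivity(72.0, 48.0)
eff = yield_efficiency(72.0, 110.0)
```

With these invented numbers the productivity comes out at 1.5 g L^-1 h^-1 and the efficiency near 71%, the same order as the best condition reported in the abstract.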
Abstract:
The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires a significant amount of knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches that do the same based on robust control theories or evolutionary computing techniques, our proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. Using this formulation, it is possible to apply linear matrix inequality solvers to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor controlled series capacitors placed in the New England/New York benchmark test system, aiming at the improvement of the damping factor of inter-area modes under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.
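The LMI machinery invoked above can be illustrated at its smallest scale: stability of a linear system ẋ = Ax is certified by a symmetric P ≻ 0 satisfying AᵀP + PA ≺ 0, and for a fixed A this reduces to a linear solve. The pure-Python 2×2 sketch below is only that feasibility kernel, not the paper's iterative BMI tuning procedure:

```python
def lyapunov_2x2(A, Q=((1.0, 0.0), (0.0, 1.0))):
    """Solve A^T P + P A = -Q for symmetric P = [[p11, p12], [p12, p22]].
    Existence of P > 0 is the basic LMI certificate of stability of A."""
    a, b = A[0]
    c, d = A[1]
    # Linear equations in (p11, p12, p22) from the (1,1), (1,2), (2,2)
    # entries of A^T P + P A = -Q
    M = [[2 * a, 2 * c, 0.0],
         [b, a + d, c],
         [0.0, 2 * b, 2 * d]]
    rhs = [-Q[0][0], -Q[0][1], -Q[1][1]]
    # tiny Gaussian elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        rhs[i], rhs[p] = rhs[p], rhs[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for col in range(i, 3):
                M[r][col] -= f * M[i][col]
            rhs[r] -= f * rhs[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, 3))) / M[i][i]
    return x  # p11, p12, p22

p11, p12, p22 = lyapunov_2x2(((0.0, 1.0), (-2.0, -3.0)))
stable = p11 > 0 and p11 * p22 - p12 * p12 > 0   # P positive definite
```

In the BMI tuning problem the controller gains multiply the unknown P, making the constraint bilinear; the iterative procedure mentioned in the abstract alternates between fixing one set of variables and solving the resulting LMI, each step being of the kind shown here.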
Abstract:
Leakage reduction in water supply systems and distribution networks has become an increasingly important issue in the water industry, since leaks and ruptures result in major physical and economic losses. Hydraulic transient solvers can be used in system operational diagnosis, namely for leak detection purposes, due to their capability to describe the dynamic behaviour of the systems and to provide substantial amounts of data. In this research work, the association of hydraulic transient analysis with an optimisation model, through inverse transient analysis (ITA), has been used for leak detection and location in an experimental facility containing PVC pipes. Observed transient pressure data have been used for testing ITA. A key factor for the success of the leak detection technique used is the accurate calibration of the transient solver, namely adequate boundary conditions and the description of energy dissipation effects, since PVC pipes are characterised by a viscoelastic mechanical response. Results have shown that leaks were located with an accuracy of between 4 and 15% of the total length of the pipeline, depending on the discretisation of the system model.
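Inverse transient analysis couples a full transient solver to an optimiser; the sketch below compresses that idea into a deliberately crude grid search matching observed reflection arrival times against a candidate leak position. The two-parameter reflection-timing model stands in for the real viscoelastic solver, and the wave speed, lengths and timings are invented:

```python
def locate_leak(observed_arrivals, wave_speed, pipe_length, n_grid=1000):
    """Inverse-analysis sketch: pick the leak position x whose predicted
    reflection arrival time t = 2*x/a best fits the observed arrivals,
    in the least-squares sense, over a grid of candidate positions."""
    best_x, best_err = None, float("inf")
    for i in range(n_grid + 1):
        x = pipe_length * i / n_grid
        t_model = 2.0 * x / wave_speed
        err = sum((t_obs - t_model) ** 2 for t_obs in observed_arrivals)
        if err < best_err:
            best_x, best_err = x, err
    return best_x

# Illustrative: a = 400 m/s (viscoelastic PVC pipes have lower wave speeds
# than metal ones), leak actually at 63 m of a 100 m pipeline
obs = [0.314, 0.316, 0.315]          # seconds, synthetic noisy arrivals
x_hat = locate_leak(obs, wave_speed=400.0, pipe_length=100.0)
```

The real ITA replaces the one-line timing model with a calibrated method-of-characteristics solver, and the grid search with a proper optimisation algorithm, but the fit-the-observed-pressures structure is the same.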
Abstract:
The performance optimisation of overhead conductors depends on systematic investigation of the fretting fatigue mechanisms in the conductor/clamping system. As a consequence, a fretting fatigue rig was designed and a limited range of fatigue tests was carried out in the middle high-cycle fatigue regime in order to establish an exploratory S-N curve for a Grosbeak conductor, which was mounted on a mono-articulated aluminium clamping system. Subsequent to these preliminary fatigue tests, the components of the conductor/clamping system, such as the ACSR conductor, upper and lower clamps, bolt and nuts, were subjected to a failure analysis procedure in order to investigate the metallurgical free variables interfering with the fatigue test results, aiming at optimisation of testing reproducibility. The results indicated that the rupture of the planar fracture surfaces observed in the external Al strands of the conductor tested under the lower bending amplitude (0.9 mm) occurred by fatigue cracking (1 mm deep), followed by shear overload. The V-type fracture surfaces observed in some Al strands of the conductor tested under the higher bending amplitude (1.3 mm) were also produced by fatigue cracking (approximately 400 μm deep), followed by shear overload. Shear overload fracture (45° fracture surface) was also observed on the remaining Al wires of the conductor tested under the higher bending amplitude (1.3 mm). Additionally, the upper and lower Al-cast clamps presented microstructure-sensitive cracking, which was followed by particle detachment and the formation of abrasive debris at the clamp/conductor tribo-interface, promoting the fretting mechanism even further. The detrimental formation of abrasive debris might be inhibited by selecting a more suitable class of as-cast Al alloy for the production of clamps.
Finally, the bolt/nut system showed intense degradation of the carbon steel nut (fabricated in ferritic-pearlitic carbon steel, featuring machined threads with 190 HV), with intense plastic deformation and loss of material. Proper selection of both the bolt and nut materials and of the finishing process might prevent the loss of clamping pressure during fretting testing. It is important to control the specification of these components (clamps, bolt and nuts) prior to the start of large-scale fretting fatigue testing of overhead conductors in order to increase the reproducibility of this assessment. (c) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Model predictive control (MPC) is usually implemented as a control strategy where the system outputs are controlled within specified zones instead of at fixed set points. One strategy to implement zone control is the selection of different weights for the output error in the control cost function. A disadvantage of this approach is that closed-loop stability cannot be guaranteed, as a different linear controller may be activated at each time step. A way to implement stable zone control is to use an infinite horizon cost in which the set point is an additional variable of the control problem. In this case, the set point is restricted to remain inside the output zone, and an appropriate output slack variable is included in the optimisation problem to assure its recursive feasibility. Following this approach, a robust MPC is developed for the case of multi-model uncertainty of open-loop stable systems. The controller is devoted to maintaining the outputs within their corresponding feasible zones while reaching the desired optimal input target. Simulation of a process from the oil refining industry illustrates the performance of the proposed strategy.
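The set-point-as-variable idea described above can be shown in a one-dimensional sketch: for a single output, the optimal zone set point is simply the projection of the predicted output onto the zone, and the slack absorbs whatever violation remains so the optimisation stays feasible. This is a heavy simplification of the actual infinite-horizon, multi-model MPC problem:

```python
def zone_target(y_pred, z_min, z_max):
    """Zone-control sketch: the set point is a decision variable restricted
    to the output zone [z_min, z_max]; the slack is whatever violation
    remains when the predicted output lies outside the zone, which keeps
    the optimisation problem feasible at every step."""
    sp = min(max(y_pred, z_min), z_max)   # optimal set point: projection
    slack = y_pred - sp                   # zero whenever y_pred is in-zone
    return sp, slack

sp, slack = zone_target(y_pred=7.3, z_min=2.0, z_max=5.0)   # above the zone
sp_in, slack_in = zone_target(y_pred=3.1, z_min=2.0, z_max=5.0)  # in-zone
```

In the full formulation the same projection logic appears as constraints inside a QP solved at each step, with the slack penalised in the infinite-horizon cost; recursive feasibility follows because the slack guarantees the constraint set is never empty.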
Abstract:
While the physiological adaptations that occur following endurance training in previously sedentary and recreationally active individuals are relatively well understood, the adaptations to training in already highly trained endurance athletes remain unclear. While significant improvements in endurance performance and corresponding physiological markers are evident following submaximal endurance training in sedentary and recreationally active groups, an additional increase in submaximal training (i.e. volume) in highly trained individuals does not appear to further enhance either endurance performance or associated physiological variables [e.g. peak oxygen uptake (V̇O2peak), oxidative enzyme activity]. It seems that, for athletes who are already trained, improvements in endurance performance can be achieved only through high-intensity interval training (HIT). The limited research which has examined changes in muscle enzyme activity in highly trained athletes, following HIT, has revealed no change in oxidative or glycolytic enzyme activity, despite significant improvements in endurance performance (p < 0.05). Instead, an increase in skeletal muscle buffering capacity may be one mechanism responsible for an improvement in endurance performance. Changes in plasma volume, stroke volume, as well as muscle cation pumps, myoglobin, capillary density and fibre type characteristics have yet to be investigated in response to HIT with the highly trained athlete. Information relating to HIT programme optimisation in endurance athletes is also very sparse. Preliminary work using the velocity at which V̇O2max is achieved (Vmax) as the interval intensity, and fractions (50 to 75%) of the time to exhaustion at Vmax (Tmax) as the interval duration, has been successful in eliciting improvements in performance in long-distance runners. However, Vmax and Tmax have not been used with cyclists.
Instead, HIT programme optimisation research in cyclists has revealed that repeated supramaximal sprinting may be equally effective as more traditional HIT programmes for eliciting improvements in endurance performance. Further examination of the biochemical and physiological adaptations which accompany different HIT programmes, as well as investigation into the optimal HIT programme for eliciting performance enhancements in highly trained athletes is required.
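The Vmax/Tmax prescription for runners described above amounts to simple arithmetic; the sketch below builds a session from invented athlete numbers (only the 50-75% fraction bounds come from the text):

```python
def hit_intervals(v_max_kmh, t_max_s, fraction=0.6, n_reps=6):
    """Sketch of the interval prescription described above: work bouts run
    at Vmax, each lasting a chosen fraction (50-75%) of Tmax. Athlete
    numbers and rep count are illustrative assumptions."""
    assert 0.50 <= fraction <= 0.75, "fraction outside the reported range"
    work_s = fraction * t_max_s
    distance_m = v_max_kmh / 3.6 * work_s   # km/h -> m/s, times duration
    return [{"rep": i + 1,
             "speed_kmh": v_max_kmh,
             "duration_s": round(work_s, 1),
             "distance_m": round(distance_m, 1)} for i in range(n_reps)]

# Illustrative runner: Vmax = 20 km/h, Tmax = 360 s, intervals at 60% Tmax
session = hit_intervals(20.0, 360.0)
```

For these assumed values each work bout lasts 216 s and covers 1200 m; recovery durations, which the abstract does not prescribe, are left out of the sketch.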