872 results for Optimisation granulaire
Abstract:
The performance of scintillation detectors, composed of a scintillating crystal coupled to a photodetector, depends critically on how efficiently the scintillation photons are collected and extracted from the crystal to the sensor. In highly pixelated imaging systems (e.g. PET, CT), the scintillators must be arranged in compact arrays with form factors unfavourable to photon transport, to the detriment of detector performance. The goal of this project is to optimise the performance of these pixel detectors by identifying the sources of light loss associated with the spectral, spatial and angular characteristics of the scintillation photons incident on the scintillator faces. Such information, acquired by Monte Carlo simulation, enables an appropriate weighting for evaluating the gains achievable with scintillator structuring methods aimed at improved light extraction towards the photodetector. A factorial design was used to gauge the magnitude of the parameters affecting light collection, notably absorption by the adhesive materials that hold the crystal arrays together and the optical performance of reflectors, both of which have a considerable impact on light output. Moreover, a reflector widely used for its exceptional optical performance was characterised under conditions more realistic than immersion in air, in which its reflectivity is invariably reported. A substantial loss of reflectivity when the reflector is inserted within scintillator arrays was revealed by simulation and then confirmed experimentally. This explains the high crosstalk rates observed, and opens the way to array assembly methods that limit, or exploit, depending on the application, this unsuspected transparency.
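As a minimal illustration of why light extraction is so sensitive to the coupling medium, the sketch below Monte Carlo samples isotropic emission directions and counts those falling inside the total-internal-reflection escape cone of a single crystal face. The refractive indices are illustrative assumptions (n ≈ 1.82 as for LYSO, 1.0 for air, 1.5 for optical grease), Fresnel reflection below the critical angle is ignored, and nothing here reproduces the full Monte Carlo transport used in the project.

```python
import numpy as np

rng = np.random.default_rng(0)

def escape_fraction(n_crystal, n_out, n_photons=1_000_000):
    """Fraction of isotropically emitted photons that hit one face of the
    crystal within the total-internal-reflection escape cone."""
    if n_out >= n_crystal:
        return 0.5  # no TIR: every photon heading toward the face may exit
    theta_c = np.arcsin(n_out / n_crystal)  # critical angle
    # For isotropic emission, cos(theta) is uniform in [-1, 1]; photons
    # with cos(theta) > cos(theta_c) head toward the face inside the cone.
    cos_t = rng.uniform(-1.0, 1.0, n_photons)
    return np.mean(cos_t > np.cos(theta_c))

for n_out, medium in [(1.0, "air"), (1.5, "optical grease")]:
    f = escape_fraction(1.82, n_out)  # n = 1.82: typical of LYSO (assumed)
    print(f"coupling to {medium}: ~{100 * f:.1f}% of photons inside escape cone")
```

With these assumed indices, roughly 8% of the photons fall in the escape cone of an air-coupled face against roughly 22% for grease coupling, which conveys the scale of the losses the structuring methods target.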
Development of self-consolidating concrete with low powder content, Éco-BAP (Eco-SCC): formulation and performance
Abstract:
Although concrete is a relatively green material, the astronomical volume of concrete produced worldwide every year places the concrete construction sector among the notable contributors to global warming. The most polluting constituent of concrete is cement, owing to its production process, which releases on average 0.83 kg CO₂ per kg of cement. Self-consolidating concrete (SCC), a type of concrete that can fill the formwork without external vibration, is a technology that can offer a solution to the sustainability issues of the concrete industry. However, all of the workability requirements of SCC stem from a higher powder content (compared with conventional concrete), which can increase both the cost of construction and the environmental impact of SCC for some applications. Ecological SCC, Eco-SCC, is a recent development combining the advantages of SCC with a significantly lower powder content. The maximum powder content of this concrete, intended for building and commercial construction, is limited to 315 kg/m³. Nevertheless, designing Eco-SCC can be challenging, since a delicate balance between the different ingredients of this concrete is required to secure a satisfactory mixture. In this Ph.D. programme, the principal objective is to develop a systematic design method to produce Eco-SCC. Since the particle lattice effect (PLE) is a key parameter for designing stable Eco-SCC mixtures and is not well understood, this phenomenon is studied in the first phase of this research. The focus in this phase is on the effect of particle-size distribution (PSD) on the PLE and on the stability of model mixtures as well as SCC. In the second phase, the design protocol is developed, and the properties of the obtained Eco-SCC mixtures in both the fresh and hardened states are evaluated. Since the assessment of robustness is crucial for successful large-scale production of concrete, the final phase of this work examines the robustness of one of the best-performing mixtures of Phase II. It was found that increasing the volume fraction of a stable size class increases the stability of that class, which in turn contributes to a higher PLE of the granular skeleton and better stability of the system. It was shown that a continuous PSD in which the volume fraction of each size class is larger than that of the consecutive coarser class can increase the PLE. Using such a PSD was shown to allow a substantial increase in the fluidity of the SCC mixture without compromising segregation resistance. An index to predict the segregation potential of a suspension of particles in a yield-stress fluid was proposed. In the second phase of the dissertation, a five-step design method for Eco-SCC was established. The design protocol started with the determination of powder and water contents, followed by the optimisation of sand and coarse aggregate volume fractions according to an ideal PSD model (Funk and Dinger). The powder composition was optimised in the third step to minimise the water demand while securing adequate performance in the hardened state. The superplasticizer (SP) content of the mixtures was determined in the next step. The last step dealt with the assessment of the global warming potential of the formulated Eco-SCC mixtures. The optimised Eco-SCC mixtures met all the requirements of self-consolidation in the fresh state. The 28-day compressive strength of these mixtures complied with the target range of 25 to 35 MPa.
In addition, the mixtures showed sufficient performance in terms of drying shrinkage, electrical resistivity and frost durability for the intended applications. The eco-performance of the developed mixtures was satisfactory as well. It was demonstrated in the last phase that the robustness of Eco-SCC is generally good with regard to variations in water content and alterations in coarse aggregate characteristics. Special attention must, however, be paid to the dosage of SP during batching.
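The aggregate optimisation step follows the Funk and Dinger ideal PSD model. The sketch below evaluates that target curve; the formula is the standard modified Andreassen expression, while the size limits and the distribution modulus q are illustrative values rather than those adopted in the thesis.

```python
import numpy as np

def funk_dinger_cpft(d, d_min, d_max, q):
    """Cumulative percent finer than (CPFT) for particle size d, per the
    Funk and Dinger (modified Andreassen) ideal PSD model."""
    return 100.0 * (d**q - d_min**q) / (d_max**q - d_min**q)

# Illustrative sieve sizes (mm) and distribution modulus q (assumed values)
d = np.geomspace(0.075, 20.0, 10)
target = funk_dinger_cpft(d, d_min=0.075, d_max=20.0, q=0.27)
for size, cpft in zip(d, target):
    print(f"d = {size:7.3f} mm -> target CPFT = {cpft:5.1f} %")
```

In practice the measured combined gradation of sand and coarse aggregate is compared against this target curve, and the volume fractions are adjusted to minimise the deviation.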
Abstract:
The conservation and valorisation of cultural heritage is of fundamental importance for our society, since it bears witness to the legacies of human societies. In the case of metallic artefacts, because corrosion is a never-ending problem, the correct strategies for their cleaning and preservation must be chosen. The aim of this project was thus the development of protocols for cleaning archaeological copper artefacts by laser and plasma cleaning, since these allow the treatment of artefacts in a controlled and selective manner. Additionally, electrochemical characterisation of artificial patinas was performed in order to obtain information on the protective properties of the corrosion layers. Reference copper samples with different artificial corrosion layers were used to evaluate the tested parameters. Laser cleaning tests resulted in partial removal of the corrosion products, but the laser-material interactions melted the corrosion layers that should be preserved. The main obstacle for this process is that the materials that must be preserved show lower ablation thresholds than the undesired layers, which makes proper elimination of dangerous corrosion products very difficult without damaging the artefacts. Different protocols should be developed for different patinas, and real artefacts should be characterised prior to any treatment to determine the best course of action. Low-pressure hydrogen plasma cleaning treatments were performed on two kinds of patinas. In both cases the corrosion layers were partially removed. Total removal of the undesired corrosion products can probably be achieved by increasing the treatment time, the applied power or the hydrogen pressure. Since the process is non-invasive and does not modify the bulk material, modifying the cleaning parameters is easy. EIS measurements show that, for the artificial patinas, the impedance increases while the patina is growing on the surface and then drops, probably due to diffusion reactions and slow dissolution of copper. These results suggest that the dissolution of copper is heavily influenced by diffusion phenomena and by the porosity of the corrosion product film. Both techniques show good cleaning results, as long as the proper parameters are used; these depend on the nature of the artefact and of the corrosion layers found on its surface.
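To make the EIS discussion concrete, the sketch below evaluates a simplified Randles-type equivalent circuit (solution resistance in series with a parallel charge-transfer resistance and double-layer capacitance), the kind of model commonly fitted to patina impedance spectra. Both the circuit and the parameter values are assumptions for illustration; the abstract does not specify the fitting model used.

```python
import numpy as np

def randles_impedance(freq_hz, Rs=50.0, Rct=5e4, Cdl=2e-5):
    """Impedance of Rs in series with (Rct parallel Cdl); values assumed."""
    w = 2 * np.pi * freq_hz
    return Rs + Rct / (1 + 1j * w * Rct * Cdl)

freqs = np.logspace(-2, 4, 7)  # 10 mHz to 10 kHz
for f, Z in zip(freqs, randles_impedance(freqs)):
    print(f"f = {f:10.2f} Hz  |Z| = {abs(Z):9.1f} ohm  "
          f"phase = {np.degrees(np.angle(Z)):6.1f} deg")
```

A growing, protective patina shows up as a larger effective Rct (higher low-frequency impedance); the drop reported above would correspond to a falling Rct, or to additional diffusion elements that this simplified circuit does not include.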
Abstract:
High energy efficiency and high performance are the key requirements for Internet of Things (IoT) end-nodes. Exploiting clusters of multiple programmable processors has recently emerged as a suitable solution to address this challenge. However, one of the main bottlenecks for multi-core architectures is the instruction cache: private caches suffer from data replication and waste area, while fully shared caches lack scalability and form a bottleneck for the operating frequency. Hence, we propose a hybrid solution in which a larger shared cache (L1.5) serves multiple cores, connected through a low-latency interconnect to small private caches (L1). However, performance is still limited by capacity misses in the small L1. We therefore propose a sequential prefetch from L1.5 to L1 to improve performance with little area overhead. Moreover, to cut the critical path for better timing, we optimised the core instruction fetch stage with non-blocking transfers, adopting a 4 × 32-bit ring-buffer FIFO and adding a pipeline stage for conditional branches. We present a detailed comparison of the performance and energy efficiency of instruction cache architectures recently proposed for Parallel Ultra-Low-Power clusters. On average, when executing a set of real-life IoT applications, our two-level cache improves performance by up to 20% while losing 7% energy efficiency relative to the private cache. Compared to a shared cache system, it improves performance by up to 17% with the same energy efficiency. Finally, an up to 20% timing (maximum frequency) improvement and software control enable the two-level instruction cache with prefetch to adapt to various battery-powered use cases, balancing high performance and energy efficiency.
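The following is a toy trace-driven sketch of the hybrid hierarchy described above: small private L1 instruction caches backed by a larger shared L1.5, with a sequential (next-line) prefetch issued on every fetch. The sizes, line size and instruction trace are illustrative assumptions; the paper's evaluation relies on detailed simulation of the PULP cluster, not on a model like this.

```python
LINE = 16  # bytes per cache line (assumed)

class DirectMappedCache:
    def __init__(self, n_lines):
        self.n_lines = n_lines
        self.tags = [None] * n_lines
        self.hits = self.misses = 0

    def access(self, addr, count=True):
        """Look up addr; fill the line on a miss. Returns True on a hit."""
        line = addr // LINE
        idx, tag = line % self.n_lines, line // self.n_lines
        hit = self.tags[idx] == tag
        if count:
            self.hits += hit
            self.misses += not hit
        self.tags[idx] = tag
        return hit

l1 = DirectMappedCache(n_lines=32)     # small private L1 (assumed size)
l15 = DirectMappedCache(n_lines=1024)  # larger shared L1.5 (assumed size)

# Mostly sequential stream looping over a 64-instruction kernel,
# loosely typical of IoT workloads (fabricated trace).
trace = [pc * 4 for _ in range(100) for pc in range(64)]

for addr in trace:
    if not l1.access(addr):
        l15.access(addr)               # L1 miss is served by the shared L1.5
    # Sequential prefetch: pull the next line toward L1 ahead of the fetch.
    l15.access(addr + LINE, count=False)
    l1.access(addr + LINE, count=False)

print(f"L1 hit rate: {l1.hits / (l1.hits + l1.misses):.2%}")
```

The prefetch converts most sequential-fetch misses into hits, which is the effect the paper exploits to recover the performance lost to the small private L1.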
Abstract:
This thesis analyses and optimises the flows of books generated between the branches of the public library Trondheim folkebibliotek, located in Trondheim, Norway. The research is part of a multi-year project, SmartLIB, that the library is undertaking with NTNU - Norwegian University of Science and Technology. The objective of this thesis is to analyse possible solutions for optimising the flow of books generated by citizens' reservations. A first phase of data collection and analysis provided the information needed to proceed with the research. Next, the possibility of reducing the flows was analysed by assigning to each department the number of copies needed to cover 90% of demand, following the Poisson distribution. Three solutions were then analysed to optimise the flows generated by the books, the filling level of the transport boxes, and the route of the truck that visits all the library branches daily; this second study was supported by the Vehicle Routing Problem (VRP). A simulation model was built in Anylogic and used to validate the proposed solutions. The results led to solutions that optimise the overall flows, reducing the book delivery delay by 50%, cutting the flow of boxes by 53% and consequently increasing the filling rate of each box by 44%. Possible future implementations of these solutions include installing a new sorting machine at the library's main branch and introducing, also at the main branch, a new daily schedule.
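Below is a minimal sketch of the stocking rule described above: for a given per-period demand rate, find the smallest number of copies at a branch that covers 90% of Poisson-distributed demand. The demand rates are illustrative; the thesis derives them from the library's circulation data.

```python
from math import exp, factorial

def copies_for_service_level(lam, level=0.90):
    """Smallest n with P(Poisson(lam) <= n) >= level."""
    cdf, n = 0.0, 0
    while True:
        cdf += exp(-lam) * lam**n / factorial(n)
        if cdf >= level:
            return n
        n += 1

# Expected reservations per replenishment period at one branch (assumed)
for lam in [0.5, 2.0, 5.0, 10.0]:
    print(f"demand rate {lam:4.1f} -> stock {copies_for_service_level(lam)} copies")
```

Holding the 90th percentile of demand locally is what removes inter-branch transfers for the bulk of reservations, which is the flow reduction the thesis quantifies.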
Abstract:
Single interface flow systems (SIFA) present some noteworthy advantages over other flow systems, such as a simpler configuration, more straightforward operation and control, and an undemanding optimisation routine. Moreover, the plain establishment of the reaction zone, which relies strictly on the mutual inter-dispersion of the adjoining solutions, can be exploited to set up multiple sequential reaction schemes providing supplementary information about the species under determination. In this context, strategies for accuracy assessment can be favourably implemented: the sample can be processed by two quasi-independent analytical methods and the final result calculated from the outcomes of both, yielding intrinsically more precise and accurate results. To demonstrate the feasibility of the approach, a SIFA system with spectrophotometric detection was designed for the determination of lansoprazole in pharmaceutical formulations. Two reaction interfaces with two distinct π-acceptors, chloranilic acid (CIA) and 2,3-dichloro-5,6-dicyano-p-benzoquinone (DDQ), were implemented. Linear working concentration ranges of 2.71 × 10⁻⁴ to 8.12 × 10⁻⁴ mol L⁻¹ and 2.17 × 10⁻⁴ to 8.12 × 10⁻⁴ mol L⁻¹ were obtained for the DDQ and CIA methods, respectively. When compared with the results furnished by the reference procedure, the results showed relative deviations lower than 2.7%. Furthermore, the repeatability was good, with r.s.d. lower than 3.8% and 4.7% for the DDQ and CIA methods, respectively. The determination rate was about 30 h⁻¹.
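The accuracy-assessment idea processes the sample by two quasi-independent methods (DDQ and CIA) and combines the two answers. The abstract does not state the combination rule; one common choice, sketched below, is the inverse-variance weighted mean. The concentrations and uncertainties are invented for illustration.

```python
def combine(x1, s1, x2, s2):
    """Inverse-variance weighted mean of two results and its uncertainty."""
    w1, w2 = 1 / s1**2, 1 / s2**2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)
    return x, (w1 + w2) ** -0.5

# Hypothetical lansoprazole results from each interface, in 1e-4 mol/L
x_ddq, s_ddq = 5.02, 0.19
x_cia, s_cia = 4.91, 0.23
x, s = combine(x_ddq, s_ddq, x_cia, s_cia)
print(f"combined estimate: ({x:.2f} +/- {s:.2f}) x 1e-4 mol/L")
```

The combined uncertainty is smaller than either individual one, which is the "intrinsically more precise" benefit the abstract claims; a large disagreement between the two results would instead flag an accuracy problem.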
Abstract:
A fully automated methodology was developed for the determination of the thyroid hormones levothyroxine (T4) and liothyronine (T3). The proposed method exploits the formation of highly coloured charge-transfer (CT) complexes between these compounds, acting as electron donors, and π-acceptors such as chloranilic acid (CIA) and 2,3-dichloro-5,6-dicyano-p-benzoquinone (DDQ). For automation of the analytical procedure, a simple, fast and versatile single interface flow system (SIFA) was implemented, guaranteeing simplified performance optimisation, low maintenance and cost-effective operation. Moreover, the single reaction interface provided a convenient and straightforward means of implementing Job's method of continuous variations, used to establish the stoichiometry of the formed CT complexes. Linear calibration plots for levothyroxine and liothyronine concentrations ranging from 5.0 × 10⁻⁵ to 2.5 × 10⁻⁴ mol L⁻¹ and 1.0 × 10⁻⁵ to 1.0 × 10⁻⁴ mol L⁻¹, respectively, were obtained, with good precision (R.S.D. < 4.6% and < 3.9%) and a determination frequency of 26 h⁻¹ for both drugs. The results obtained for pharmaceutical formulations were statistically comparable to the declared hormone amounts, with relative deviations lower than 2.1%. The accuracy was confirmed by recovery studies, which furnished recovery values ranging from 96.3% to 103.7% for levothyroxine and 100.1% for liothyronine.
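Job's method of continuous variations, implemented here on the single reaction interface, locates the donor mole fraction at which the complex absorbance peaks; for an m:n complex the maximum falls at m/(m+n). The sketch below illustrates this for an ideal 1:1 complex with a toy absorbance model.

```python
import numpy as np

# Donor mole fraction, with total (donor + acceptor) concentration fixed
x = np.linspace(0.05, 0.95, 19)
absorbance = x * (1 - x)        # ideal 1:1 complex: A proportional to [D][A]
x_peak = x[np.argmax(absorbance)]
print(f"absorbance maximum at x = {x_peak:.2f} -> 1:1 stoichiometry")
```

A peak at x = 0.33 would instead indicate a 1:2 donor:acceptor complex, which is how the measured Job plot translates directly into stoichiometry.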
Abstract:
BACKGROUND: Xylitol is a sugar alcohol (polyalcohol) with many interesting properties for pharmaceutical and food products. It is currently produced by a chemical process, which has some disadvantages such as a high energy requirement. Microbiological production of xylitol has therefore been studied as an alternative, but its viability depends on optimisation of the fermentation variables. Among these, aeration is fundamental, because xylitol is produced only under adequate oxygen availability. In most experiments with xylitol-producing yeasts, low volumetric oxygen transfer coefficient (KLa) values are used to maintain microaerobic conditions. However, in the present study the use of relatively high KLa values resulted in high xylitol production. The effect of aeration was also evaluated via the profiles of xylose reductase (XR) and xylitol dehydrogenase (XD) activities during the experiments. RESULTS: The highest XR specific activity (1.45 ± 0.21 U mg protein⁻¹) was achieved in the experiment with the lowest KLa value (12 h⁻¹), while the highest XD specific activity (0.19 ± 0.03 U mg protein⁻¹) was observed at a KLa value of 25 h⁻¹. Xylitol production was enhanced when KLa was increased from 12 to 50 h⁻¹, the best condition observed, corresponding to a xylitol volumetric productivity of 1.50 ± 0.08 g xylitol L⁻¹ h⁻¹ and an efficiency of 71 ± 6.0%. CONCLUSION: The results showed that the enzyme activities during xylitol bioproduction depend greatly on the initial KLa value (oxygen availability). This finding supplies important information for further studies in molecular biology and genetic engineering aimed at improving xylitol bioproduction.
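As a back-of-the-envelope companion to the figures of merit quoted above, the sketch below computes volumetric productivity and a conversion efficiency defined against an assumed theoretical yield. The fermentation numbers and the theoretical yield value are placeholders, not data from the paper.

```python
def productivity(xylitol_g_per_L, hours):
    """Volumetric productivity in g xylitol per litre per hour."""
    return xylitol_g_per_L / hours

def efficiency(xylitol_g_per_L, xylose_consumed_g_per_L, y_theoretical=0.917):
    """Observed yield as % of an assumed theoretical yield (g/g); the paper
    does not state the figure it used, so y_theoretical is a placeholder."""
    return 100 * (xylitol_g_per_L / xylose_consumed_g_per_L) / y_theoretical

# Hypothetical run: 36 g/L xylitol from 55 g/L xylose consumed in 24 h
print(f"Qp = {productivity(36.0, 24.0):.2f} g/(L*h), "
      f"efficiency = {efficiency(36.0, 55.0):.0f}%")
```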
Abstract:
The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires a significant amount of knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches based on robust control theories or evolutionary computing techniques, the proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. With this formulation, linear matrix inequality solvers can be applied to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor-controlled series capacitors placed in the New England/New York benchmark test system, aiming at improving the damping factor of inter-area modes under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.
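The tuning objective is the damping factor of the closed-loop modes. The sketch below shows only that evaluation side: for a toy state-space model and a scalar supplementary gain, it computes the damping ratio ζ = -Re(λ)/|λ| of each oscillatory mode and picks the gain with the best worst-case damping by brute-force sweep. The BMI/LMI machinery of the paper is deliberately not reproduced here, and the matrices are random placeholders rather than the New England/New York system.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.normal(scale=0.3, size=(n, n)) - np.eye(n)  # toy open-loop plant
b = rng.normal(size=(n, 1))                         # TCSC modulation input
c = rng.normal(size=(1, n))                         # feedback signal

def min_damping(k):
    """Worst damping ratio over the oscillatory closed-loop modes."""
    eig = np.linalg.eigvals(A + k * (b @ c))
    if np.any(eig.real >= 0):
        return -np.inf                  # unstable: reject this gain
    osc = eig[np.abs(eig.imag) > 1e-6]
    if osc.size == 0:
        return -np.inf                  # no oscillatory mode to damp
    return np.min(-osc.real / np.abs(osc))

gains = np.linspace(-2.0, 2.0, 401)
best = max(gains, key=min_damping)
print(f"best gain {best:+.3f}, worst-case damping ratio {min_damping(best):.3f}")
```

The paper replaces this brute-force sweep with the iterative LMI solution of the BMI formulation, which scales to full controller parameterisations and multiple operating conditions.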
Abstract:
Leakage reduction in water supply systems and distribution networks has become an increasingly important issue in the water industry, since leaks and ruptures result in major physical and economic losses. Hydraulic transient solvers can be used in system operational diagnosis, namely for leak detection purposes, owing to their capability to describe the dynamic behaviour of the system and to provide substantial amounts of data. In this research work, the association of hydraulic transient analysis with an optimisation model, through inverse transient analysis (ITA), has been used for detecting and locating leaks in an experimental facility containing PVC pipes. Observed transient pressure data were used for testing ITA. A key factor for the success of this leak detection technique is the accurate calibration of the transient solver, namely adequate boundary conditions and the description of energy dissipation effects, since PVC pipes are characterised by a viscoelastic mechanical response. Results have shown that leaks were located with an accuracy of 4-15% of the total pipeline length, depending on the discretisation of the system model.
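ITA couples a transient solver with an optimiser: candidate leak parameters are adjusted until simulated pressures match the observed ones. The sketch below shows that coupling only; the stand-in pressure model is a fabricated damped oscillation, not a water-hammer solver with viscoelastic damping, and the "observed" data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 2.0, 200)  # s

def toy_transient_pressure(leak_pos, leak_size, t):
    """Hypothetical stand-in for a calibrated hydraulic transient solver;
    leak_pos in [0, 1] is the relative location along the pipeline."""
    damping = 0.5 + 5.0 * leak_size   # bigger leak -> faster pressure decay
    phase = 2 * np.pi * leak_pos      # location shifts the reflection timing
    return 30 + 10 * np.exp(-damping * t) * np.cos(8 * np.pi * t + phase)

true_pos, true_size = 0.35, 0.40
observed = (toy_transient_pressure(true_pos, true_size, t)
            + np.random.default_rng(3).normal(0.0, 0.1, t.size))

def residual(params):
    return toy_transient_pressure(params[0], params[1], t) - observed

fit = least_squares(residual, x0=[0.5, 0.2], bounds=([0, 0], [1, 1]))
print(f"estimated leak at {fit.x[0]:.2f} of pipe length (true {true_pos:.2f}), "
      f"size {fit.x[1]:.2f} (true {true_size:.2f})")
```

The quality of the estimate hinges on how faithfully the forward model reproduces the real transients, which is why the calibration of boundary conditions and viscoelastic dissipation is stressed above.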
Abstract:
The performance optimisation of overhead conductors depends on systematic investigation of the fretting fatigue mechanisms in the conductor/clamping system. Consequently, a fretting fatigue rig was designed and a limited range of fatigue tests was carried out in the mid-to-high cycle fatigue regime in order to obtain an exploratory S-N curve for a Grosbeak conductor mounted on a mono-articulated aluminium clamping system. Following these preliminary fatigue tests, the components of the conductor/clamping system, such as the ACSR conductor, upper and lower clamps, bolt and nuts, were subjected to a failure analysis procedure in order to investigate the metallurgical free variables interfering with the fatigue test results, aiming at optimising testing reproducibility. The results indicated that the rupture along the planar fracture surfaces observed in the external Al strands of the conductor tested at the lower bending amplitude (0.9 mm) occurred by fatigue cracking (1 mm deep), followed by shear overload. The V-type fracture surfaces observed in some Al strands of the conductor tested at the higher bending amplitude (1.3 mm) were also produced by fatigue cracking (approximately 400 μm deep), followed by shear overload. Shear overload fracture (45° fracture surface) was also observed on the remaining Al wires of the conductor tested at the higher bending amplitude (1.3 mm). Additionally, the upper and lower Al-cast clamps presented microstructure-sensitive cracking, which was followed by particle detachment and the formation of abrasive debris at the clamp/conductor tribo-interface, further promoting the fretting mechanism. The detrimental formation of abrasive debris might be inhibited by selecting a more suitable class of as-cast Al alloy for the production of clamps. Finally, the bolt/nut system showed intense degradation of the carbon steel nut (fabricated in ferritic-pearlitic carbon steel, featuring machined threads with 190 HV), with intense plastic deformation and loss of material. Proper selection of both the bolt and nut materials and of the finishing processing might prevent loss of clamping pressure during fretting testing. It is important to control the specification of these components (clamps, bolt and nuts) prior to the start of large-scale fretting fatigue testing of overhead conductors in order to increase the reproducibility of this assessment.
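An exploratory S-N curve of this kind is commonly summarised by a Basquin power-law fit, S = A·N^b, obtained by linear regression in log-log space. The sketch below shows such a fit on invented stress/life pairs; none of the numbers are the Grosbeak measurements.

```python
import numpy as np

# Hypothetical (cycles to failure, stress amplitude in MPa) pairs
N = np.array([2e5, 5e5, 1e6, 4e6, 1e7])
S = np.array([42.0, 35.0, 31.0, 25.0, 21.5])

# Basquin law S = A * N**b, fitted as a straight line in log-log space
b, logA = np.polyfit(np.log10(N), np.log10(S), 1)
A = 10**logA
print(f"Basquin fit: S = {A:.1f} * N^({b:.3f}) MPa")
print(f"predicted amplitude at 2e6 cycles: {A * (2e6)**b:.1f} MPa")
```

Controlling the metallurgical free variables identified above tightens the scatter of the (N, S) points, which directly improves the confidence of such a fit.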
Abstract:
Model predictive control (MPC) is usually implemented as a control strategy in which the system outputs are controlled within specified zones instead of at fixed set points. One way to implement zone control is through the selection of different weights for the output error in the control cost function. A disadvantage of this approach is that closed-loop stability cannot be guaranteed, as a different linear controller may be activated at each time step. A way to implement stable zone control is to use an infinite-horizon cost in which the set point is an additional decision variable of the control problem. In this case, the set point is restricted to remain inside the output zone, and an appropriate output slack variable is included in the optimisation problem to ensure the recursive feasibility of the control optimisation problem. Following this approach, a robust MPC is developed for the case of multi-model uncertainty in open-loop stable systems. The controller maintains the outputs within their corresponding feasible zones while reaching the desired optimal input target. Simulation of a process from the oil refining industry illustrates the performance of the proposed strategy.
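A minimal sketch of the zone-control formulation described above: the set point is itself a decision variable constrained to the output zone, and a slack variable on the zone keeps the problem feasible. The toy model below is a finite-horizon, single-model QP, so it does not capture the paper's infinite-horizon cost or multi-model uncertainty; all numbers are illustrative.

```python
import numpy as np
import cvxpy as cp

# Toy stable SISO model x+ = A x + B u, y = C x (assumed numbers)
A, B, C = np.array([[0.9]]), np.array([[0.1]]), np.array([[1.0]])
Np = 20                      # prediction horizon (finite here, unlike the paper)
y_lo, y_hi = 1.0, 2.0        # output zone
u_target = 0.5               # desired input target

x = cp.Variable((1, Np + 1))
u = cp.Variable((1, Np))
ysp = cp.Variable()          # the set point is itself a decision variable
slack = cp.Variable(nonneg=True)  # output slack for recursive feasibility

cost = 1e4 * slack**2        # heavily penalised zone relaxation
constr = [x[:, 0] == 2.5]    # initial state outside the zone
for k in range(Np):
    cost += cp.sum_squares(C @ x[:, k] - ysp) \
            + 0.1 * cp.sum_squares(u[:, k] - u_target)
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               u[:, k] >= -1.0, u[:, k] <= 1.0]
constr += [ysp >= y_lo - slack, ysp <= y_hi + slack]  # set point kept in zone

cp.Problem(cp.Minimize(cost), constr).solve()
print(f"chosen set point: {float(ysp.value):.3f}, first move: {u.value[0, 0]:.3f}")
```

Because the optimiser chooses the set point inside the zone, the same quadratic cost is minimised at every step, which is the mechanism that restores the stability guarantee lost by the weight-switching approach.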
Abstract:
While the physiological adaptations that occur following endurance training in previously sedentary and recreationally active individuals are relatively well understood, the adaptations to training in already highly trained endurance athletes remain unclear. While significant improvements in endurance performance and corresponding physiological markers are evident following submaximal endurance training in sedentary and recreationally active groups, an additional increase in submaximal training (i.e. volume) in highly trained individuals does not appear to further enhance either endurance performance or associated physiological variables [e.g. peak oxygen uptake (VO2peak), oxidative enzyme activity]. It seems that, for athletes who are already trained, improvements in endurance performance can be achieved only through high-intensity interval training (HIT). The limited research that has examined changes in muscle enzyme activity in highly trained athletes following HIT has revealed no change in oxidative or glycolytic enzyme activity, despite significant improvements in endurance performance (p < 0.05). Instead, an increase in skeletal muscle buffering capacity may be one mechanism responsible for the improvement in endurance performance. Changes in plasma volume, stroke volume, muscle cation pumps, myoglobin, capillary density and fibre type characteristics have yet to be investigated in response to HIT in highly trained athletes. Information relating to HIT programme optimisation in endurance athletes is also very sparse. Preliminary work using the velocity at which VO2max is achieved (Vmax) as the interval intensity, and fractions (50 to 75%) of the time to exhaustion at Vmax (Tmax) as the interval duration, has been successful in eliciting improvements in performance in long-distance runners. However, Vmax and Tmax have not been used with cyclists. Instead, HIT programme optimisation research in cyclists has revealed that repeated supramaximal sprinting may be as effective as more traditional HIT programmes for eliciting improvements in endurance performance. Further examination of the biochemical and physiological adaptations that accompany different HIT programmes, as well as investigation into the optimal HIT programme for eliciting performance enhancement in highly trained athletes, is required.
Abstract:
The XSophe-Sophe-XeprView® computer simulation software suite enables scientists to easily determine spin Hamiltonian parameters from isotropic, randomly oriented and single-crystal continuous wave electron paramagnetic resonance (CW EPR) spectra of radicals and isolated paramagnetic metal ion centres or clusters found in metalloproteins, chemical systems and materials science. XSophe provides an X-Windows graphical user interface to the Sophe programme and allows creation of multiple input files, local and remote execution of Sophe, and the display of sophelog (output from Sophe) and input parameters/files. Sophe is a sophisticated computer simulation software programme employing a number of innovative technologies, including the Sydney OPera HousE (SOPHE) partition and interpolation schemes, a field segmentation algorithm, the mosaic misorientation linewidth model, parallelisation and spectral optimisation. In conjunction with the SOPHE partition scheme and the field segmentation algorithm, the SOPHE interpolation scheme and the mosaic misorientation linewidth model greatly increase the speed of simulations for most spin systems. Employing brute-force matrix diagonalisation in the simulation of an EPR spectrum from a high-spin Cr(III) complex with the spin Hamiltonian parameters ge = 2.00, D = 0.10 cm⁻¹, E/D = 0.25, Ax = 120.0, Ay = 120.0, Az = 240.0 × 10⁻⁴ cm⁻¹ requires a SOPHE grid size of N = 400 (to produce a good signal-to-noise ratio) and takes 229.47 s. In contrast, the use of either the SOPHE interpolation scheme or the mosaic misorientation linewidth model requires a SOPHE grid size of only N = 18 and takes 44.08 s and 0.79 s, respectively. Results from Sophe are transferred via the Common Object Request Broker Architecture (CORBA) to XSophe and subsequently to XeprView®, where the simulated CW EPR spectra (1D and 2D) can be compared to the experimental spectra. Energy level diagrams, transition roadmaps and transition surfaces aid the interpretation of complicated randomly oriented CW EPR spectra and can be viewed with a web browser and an OpenInventor scene graph viewer.
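The brute-force route that the SOPHE interpolation scheme accelerates is repeated diagonalisation of the spin Hamiltonian over an orientation grid. The sketch below builds the zero-field-splitting plus electron Zeeman Hamiltonian for the S = 3/2 Cr(III) parameters quoted above and diagonalises it at a single field orientation; the hyperfine terms are omitted for brevity, and the field value is an arbitrary example.

```python
import numpy as np

S = 1.5                                   # high-spin Cr(III), S = 3/2
m = np.arange(S, -S - 1, -1)              # m_S = 3/2, 1/2, -1/2, -3/2
dim = len(m)
Sz = np.diag(m)
# Raising operator S+ in the |S, m> basis, then Sx and Sy from S+/S-.
Splus = np.zeros((dim, dim))
for i in range(1, dim):
    Splus[i - 1, i] = np.sqrt(S * (S + 1) - m[i] * (m[i] + 1))
Sx = (Splus + Splus.T) / 2
Sy = (Splus - Splus.T) / 2j

g, D, E = 2.00, 0.10, 0.25 * 0.10         # ge, D, E = (E/D)*D, in cm^-1
muB = 4.66864e-5                          # Bohr magneton in cm^-1 per gauss
B = 3400.0                                # example field (G), along z

H = (D * (Sz @ Sz - S * (S + 1) / 3 * np.eye(dim))
     + E * (Sx @ Sx - Sy @ Sy)
     + g * muB * B * Sz)
print("energy levels (cm^-1):", np.round(np.linalg.eigvalsh(H), 4))
```

A full powder simulation repeats this diagonalisation (plus the hyperfine terms) over thousands of orientations and field values, which is why reducing the grid from N = 400 to N = 18 produces the quoted speed-ups.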
Abstract:
Power system real-time security assessment is one of the fundamental modules of electricity markets. Typically, when a contingency occurs, the security assessment and enhancement module must be ready for action within about 20 minutes to meet the real-time requirement. The recent California blackout again highlighted the importance of system security. This paper proposes an approach for power system security assessment and enhancement based on information provided by a pre-defined system parameter space. The proposed scheme opens up an efficient way to perform real-time security assessment and enhancement in a competitive electricity market for the single-contingency case.