983 results for system improvements
Abstract:
To achieve sustainability in the area of transport we need to view the decision-making process as a whole and consider all the most important socio-economic and environmental aspects involved. Improvements in transport infrastructure have a positive impact on regional development and significant repercussions on the economy, as well as affecting a large number of ecological processes. This article presents a DSS to assess the territorial effects of new linear transport infrastructures based on the use of GIS. The TITIM (Transport Infrastructure Territorial Impact Measurement) GIS tool allows these effects to be calculated by evaluating the improvement in accessibility, the loss of landscape connectivity, and the impact on other local territorial variables such as landscape quality, biodiversity and land-use quality. The TITIM GIS tool assesses these variables automatically, simply by entering the required inputs, thus avoiding the manual reiteration and execution of these multiple processes. TITIM allows researchers to use their own GIS databases as inputs, in contrast with other tools that use official or predefined maps. The TITIM GIS tool is tested by application to six HSR projects in the Spanish Strategic Transport and Infrastructure Plan 2005–2020 (PEIT). The tool creates all 65 possible combinations of these projects, which constitute the real test scenarios. For each one, the tool calculates the accessibility improvement, the landscape connectivity loss, and the impact on landscape, biodiversity and land-use quality. The results reveal which of the HSR projects brings the greatest benefit to the transport system, identify any potential synergies, and help define priorities for implementing the infrastructures in the plan.
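As a hedged sketch of the scenario enumeration the tool performs, the combinations of six projects can be generated with Python's itertools. The project labels below are hypothetical placeholders, and note that six projects alone yield 63 non-empty subsets; the tool's 65 scenarios presumably include additional reference cases such as the base network.

```python
from itertools import combinations

# Hypothetical labels for the six HSR projects in the plan
projects = ["HSR-A", "HSR-B", "HSR-C", "HSR-D", "HSR-E", "HSR-F"]

# Every non-empty subset of projects is a candidate test scenario
scenarios = [frozenset(c)
             for r in range(1, len(projects) + 1)
             for c in combinations(projects, r)]

print(len(scenarios))  # 63 non-empty combinations of six projects
```

Each scenario can then be fed to the accessibility and connectivity calculations in turn, which is exactly the kind of repetitive batch processing the tool automates.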
Abstract:
Illumination with light-emitting diodes (LEDs) is increasingly replacing traditional light sources. LEDs offer advantages in efficiency, energy consumption, design, size and light quality. For more than 50 years, researchers have been working on LED improvements, and their relevance for illumination is rapidly increasing. This thesis focuses on one important field of application: spotlights. These are used to focus light on defined areas and outstanding objects under professional conditions. This high-performance illumination requires a defined light quality, including tunable correlated color temperatures (CCT), a high color rendering index (CRI), high efficiency, and bright, vivid colors. Several differently colored chips (red, blue, phosphor-converted) are combined in the LED package to meet a spectral power distribution with high CRI, tunable white and several light colors; secondary optics are used to collimate the light into the desired narrow spots with a defined angle of emission. The combination of a multi-color LED source and optical elements may cause chromatic inhomogeneities in the spatial and angular light distribution, which need to be solved in the optical design. However, perfect uniformity of the spot is not required, owing to the threshold of visual perception of the human eye. Therefore, a mathematical description of the level of color uniformity with regard to visual perception is required. This thesis is organized in seven chapters. After an initial chapter presenting the motivation that has guided this research, Chapter 2 introduces the scientific basics of color uniformity in spotlights: the applied color space CIELAB, visual color perception, spotlight design fundamentals with regard to light engines and nonimaging optics, and the state of the art in the evaluation of color uniformity in the far field of spotlights. Chapter 3 develops different methods for the mathematical description of the spatial color distribution in a defined area: the maximum color difference, the average color deviation, the gradient of the spatial color distribution, and the radial and axial smoothness. Each function refers to different visual influencing factors and requires different handling of the data taken into account, along with weighting functions that pre- and post-process the simulated or measured data: noise reduction, luminance cutoff, luminance weighting, the contrast sensitivity function, and the cumulative distribution function. In Chapter 4, the merit function Usl for the estimation of perceived color uniformity in spotlights is derived. It is based on the results of two sets of human-factor experiments performed to evaluate subjects' visual perception of typical spotlight patterns. The first human-factor experiment yielded a perceived rank order of the spotlights, which was used to correlate the mathematical descriptions of the basic and weighted functions of the spatial color distribution, leading to the Usl function. The second human-factor experiment tested the perception of spotlights under varied environmental conditions, with the objective of providing an absolute scale for Usl, so that the subjective personal opinion of individuals could be replaced by a standardized merit function. The validation of the Usl function with regard to its application range and conditions, as well as its limitations and restrictions, is carried out in Chapter 5. Measured and simulated data of several optical systems were compared; fields of application are discussed, as well as validations and restrictions of the function. Chapter 6 presents spotlight system design and optimization. An evaluation analyzes reflector-based and TIR-lens systems. The simulated optical systems are compared in terms of color uniformity Usl, sensitivity to colored shadows, efficiency, and peak luminous intensity. No single system performed best in all categories, and excellent color uniformity could be reached by two different system assemblies. Finally, Chapter 7 summarizes the conclusions of the thesis and gives an outlook on further investigation topics.
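As a minimal sketch of the kind of basic descriptors developed in Chapter 3, the maximum color difference and average color deviation can be computed over CIELAB samples of the spot. The CIE76 Euclidean distance is used here for simplicity, and the sample data are invented.

```python
import math

def delta_e(lab1, lab2):
    # CIE76 color difference: Euclidean distance in CIELAB
    return math.dist(lab1, lab2)

def uniformity_descriptors(lab_samples):
    """Basic descriptors over (L*, a*, b*) samples across the spot:
    maximum pairwise color difference and mean deviation from the centroid."""
    n = len(lab_samples)
    centroid = tuple(sum(s[i] for s in lab_samples) / n for i in range(3))
    max_diff = max(delta_e(p, q) for p in lab_samples for q in lab_samples)
    avg_dev = sum(delta_e(p, centroid) for p in lab_samples) / n
    return max_diff, avg_dev

# Invented samples of a slightly inhomogeneous spot
samples = [(50.0, 0.0, 0.0), (50.0, 4.0, 0.0), (50.0, 0.0, 3.0)]
max_diff, avg_dev = uniformity_descriptors(samples)
print(round(max_diff, 2), round(avg_dev, 2))  # 5.0 2.31
```

The thesis's Usl function goes further, weighting such raw descriptors by luminance and contrast sensitivity before comparing them with perceived rank orders.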
Abstract:
Increasing economic competition drives industry to implement tools that improve process efficiency. Process automation is one of these tools, and Real Time Optimization (RTO) is an automation methodology that considers economic aspects to update the process control in accordance with market prices and disturbances. Basically, RTO uses a steady-state phenomenological model to predict process behavior and then optimizes an economic objective function subject to this model. Although largely implemented in industry, there is no general agreement about the benefits of implementing RTO, due to some limitations discussed in the present work: structural plant/model mismatch, identifiability issues and low frequency of set-point updates. Some alternative RTO approaches have been proposed in the literature to handle the problem of structural plant/model mismatch. However, there is no thorough comparison evaluating the scope and limitations of these RTO approaches under different aspects. For this reason, the classical two-step method is compared to more recent derivative-based methods (Modifier Adaptation, Integrated System Optimization and Parameter Estimation, and Sufficient Conditions of Feasibility and Optimality) using a Monte Carlo methodology. The results of this comparison show that the classical RTO method is consistent, provided that the model is flexible enough to represent the process topology, the parameter estimation method is appropriate to the measurement noise characteristics, and a method is used to improve the quality of the sample information. At each iteration, the RTO methodology updates key parameters of the model; identifiability issues caused by the lack of measurements and by measurement noise can then arise, resulting in poor prediction ability.
Therefore, four different parameter estimation approaches (Rotational Discrimination, Automatic Selection and Parameter Estimation, Reparametrization via Differential Geometry and classical nonlinear Least Squares) are evaluated with respect to their prediction accuracy, robustness and speed. The results show that the Rotational Discrimination method is the most suitable for implementation in an RTO framework, since it requires less a priori information, is simple to implement and avoids the overfitting caused by the Least Squares method. The third RTO drawback discussed in the present thesis is the low frequency of set-point updates, which increases the period in which the process operates at suboptimal conditions. An alternative to handle this problem is proposed here, integrating classic RTO and Self-Optimizing Control (SOC) using a new Model Predictive Control strategy. The new approach demonstrates that it is possible to reduce the problem of the low frequency of set-point updates, improving economic performance. Finally, the practical aspects of RTO implementation are examined in an industrial case study: a Vapor Recompression Distillation (VRD) process located in the Paulínea refinery of Petrobras. The conclusions of this study suggest that the model parameters are successfully estimated by the Rotational Discrimination method; that RTO is able to improve the process profit by about 3%, equivalent to 2 million dollars per year; and that the integration of SOC and RTO may be an interesting control alternative for the VRD process.
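A toy sketch of the classical two-step RTO loop (with an entirely invented model and economics, not the thesis's case study) shows both the mechanism and the structural-mismatch limitation: the model y = θ·u cannot represent the plant's offset term, so the loop converges to a set point away from the true plant optimum.

```python
import math

def plant_output(u):
    # "True" plant, unknown to the optimizer (note the offset the model lacks)
    return 2.0 * u + 1.0

PRICE, COST = 4.0, 1.0  # invented economics: profit = PRICE*y - COST*u**2

def two_step_rto(u0, iterations=30):
    u = u0
    for _ in range(iterations):
        # Step 1: estimate the model parameter from the current measurement,
        # fitting the structurally deficient model y = theta * u
        theta = plant_output(u) / u
        # Step 2: optimize PRICE*theta*u - COST*u**2 on the model
        # (analytic optimum of the quadratic model objective)
        u = PRICE * theta / (2.0 * COST)
    return u

u_rto = two_step_rto(1.0)
u_true = 4.0  # maximizer of the true plant profit 4*(2u+1) - u**2
print(round(u_rto, 3), u_true)  # converges near 4.449, not the true optimum 4
```

The gap between the converged set point and the true optimum is exactly the plant/model mismatch effect that the derivative-based methods named above (e.g., Modifier Adaptation) are designed to remove.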
Abstract:
At the present time there is high pressure toward the improvement of all production processes. These improvements can be pursued in several directions, in particular those involving energy efficiency. The definition of tight energy-efficiency improvement policies is transversal to several operational areas, ranging from industry to public services. As can be expected, agricultural processes are not immune to this tendency. This is all the more true for indoor production, where the climate inside the building, or inside a partial growing zone, must be artificially controlled. Regarding the latter, this paper presents an innovative system that improves the energy efficiency of a tree-growing platform. The new system controls both a water pump and a gas heating system based on information provided by an array of sensors. To do this, a multi-input, multi-output regulator was implemented by means of a fuzzy logic control strategy. The presented results show that it is possible to track the desired growing-temperature set-point while keeping actuator stress within an acceptable range.
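A minimal single-loop sketch of the fuzzy control idea can illustrate how such a regulator maps a temperature error to a heater command. The membership shapes, rule consequents and tuning values below are invented, and the paper's actual controller is multi-input, multi-output (it also drives a water pump).

```python
def tri(x, a, b, c):
    """Triangular membership function, peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def shoulder_left(x, a, b):
    """Membership that is 1 below a and falls to 0 at b."""
    return 1.0 if x <= a else 0.0 if x >= b else (b - x) / (b - a)

def shoulder_right(x, a, b):
    """Membership that is 0 below a and rises to 1 at b."""
    return 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

def fuzzy_heater(error):
    """error = set-point minus measured temperature (degrees C).
    Returns heater power in % via a weighted average of rule outputs."""
    mu_warm = shoulder_left(error, -2.0, 0.0)   # zone too warm
    mu_ok   = tri(error, -2.0, 0.0, 2.0)        # zone about right
    mu_cold = shoulder_right(error, 0.0, 2.0)   # zone too cold
    # Rule consequents as output singletons (% heater power), invented tuning
    rules = {0.0: mu_warm, 30.0: mu_ok, 100.0: mu_cold}
    den = sum(rules.values())
    return sum(u * mu for u, mu in rules.items()) / den if den else 0.0

print(fuzzy_heater(5.0), fuzzy_heater(0.0), fuzzy_heater(-5.0))
```

In the MIMO setting described in the paper, each actuator gets its own consequents and the rule base combines several sensor inputs, but the inference step is the same.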
Abstract:
This BEEP explains the mechanism of the EU Emissions Trading System (ETS) for the greenhouse gas carbon dioxide and explores its likely sustainability impact on European industry. In doing so, it focuses on energy-intensive industries such as cement, steel and aluminium production, as well as on the emerging hydrogen economy. The BEEP concludes that, at the moment, the scheme is still very inconsistently implemented and has a fairly narrow scope regarding greenhouse gases and the sectors involved. It may also give energy-intensive industries an incentive to relocate. In its current form, the EU ETS does not yet properly facilitate long-term innovation dynamics such as the transition to a hydrogen economy. Nevertheless, the EU ETS is foremost a working system that, with some improvements, has the potential to become a pillar of effective and efficient climate change policy that also gives incentives for investment in climate-friendly policies.
Abstract:
The edge-to-edge matching model has been further developed, using the Cu/Cr system as an example. The conditions for zigzag atom rows to be matching directions are included, and the critical value of the interatomic spacing misfit along matching directions and the critical value of the d-value mismatch between matching planes are proposed in the new version of the model.
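As a hedged sketch of the kind of quantity the model evaluates, the fractional interatomic spacing misfit along close-packed atom rows can be computed from textbook lattice parameters (Cu FCC a = 3.615 Å, Cr BCC a = 2.885 Å). The normalisation convention used here (dividing by the larger spacing) is an assumption, and the model's actual critical values are not reproduced.

```python
import math

def spacing_misfit(r_a, r_b):
    # Fractional interatomic spacing misfit along matching atom rows
    # (assumed convention: normalised by the larger spacing)
    return abs(r_a - r_b) / max(r_a, r_b)

r_cu = 3.615 / math.sqrt(2.0)        # Cu FCC: close-packed <110> row spacing a/sqrt(2)
r_cr = 2.885 * math.sqrt(3.0) / 2.0  # Cr BCC: close-packed <111> row spacing a*sqrt(3)/2
print(f"{100.0 * spacing_misfit(r_cu, r_cr):.1f}%")  # about 2.3%
```

A misfit of a few per cent along the matching rows is small, consistent with Cu/Cr being a useful example system for the matching analysis.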
Abstract:
Background: Although both strength training (ST) and endurance training (ET) appear to be beneficial in type 2 diabetes mellitus (T2D), little is known about post-exercise glucose profiles. The objective of the study was to report changes in blood glucose (BG) values after a 4-month ET and ST programme, now that a device for continuous glucose monitoring has become available. Materials and methods: Fifteen participants with T2D, comprising four men aged 56.5 +/- 0.9 years and 11 women aged 57.4 +/- 0.9 years, were monitored with the MiniMed (Northridge, CA, USA) continuous glucose monitoring system (CGMS) for 48 h before and after 4 months of ET or ST. The ST consisted of three sets at the beginning, increasing to six sets per week at the end of the training period, including all major muscle groups; the ET was performed at an intensity of 60% of maximal oxygen uptake and a volume beginning at 15 min and advancing to a maximum of 30 min three times a week. Results: A total of 17 549 single BG measurements were recorded pretraining (619.7 +/- 39.8 per participant) and post-training (550.3 +/- 30.1), corresponding to an average of 585 +/- 25.3 potential measurements per participant at the beginning and at the end of the study. The change in BG value between the beginning (132 mg dL(-1)) and the end (118 mg dL(-1)) for all participants was significant (P = 0.028). The improvement in BG value for the ST programme was significant (P = 0.02), but no significant change was measured for the ET (P = 0.48). Glycaemic control improved in the ST group, and the mean BG was reduced by 15.6% (CI 3-25%). Conclusion: The CGMS may be a useful tool for monitoring improvements in glycaemic control after different exercise programmes. Additionally, the CGMS may help to identify asymptomatic hypoglycaemia or hyperglycaemia after training programmes.
Abstract:
Hydrogen storage in traditional metallic hydrides can deliver about 1.5 to 2.0 wt pct hydrogen, whereas magnesium hydrides can achieve more than 7 wt pct. However, these systems suffer from the drawback of high release temperatures and from chemical instability. Recently, significant improvements in reducing the release temperature and increasing the hydrogenation kinetics have been made in nanostructured Mg-based composites. This paper aims to provide an overview of the science and engineering of Mg materials and their nanosized composites with nanostructured carbon for hydrogen storage. Research needs, including preparation of the materials, processing and characterisation, and basic mechanisms, will be explored. Preliminary experimental results indicate a promising future for chemically stable hydrogen storage at lower temperatures using carbon-nanotube-modified metal hydrides.
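The headline capacities quoted above follow directly from formula masses. A quick check with standard atomic masses, taking MgH2 as the magnesium hydride, reproduces the >7 wt pct figure:

```python
def hydrogen_wt_pct(n_h, formula_mass):
    """Gravimetric hydrogen capacity (wt%) for a hydride with n_h
    hydrogen atoms per formula unit and the given formula mass (g/mol)."""
    return 100.0 * n_h * 1.008 / formula_mass

# MgH2: Mg = 24.305 g/mol, H = 1.008 g/mol
mgh2_mass = 24.305 + 2 * 1.008
capacity = hydrogen_wt_pct(2, mgh2_mass)
print(round(capacity, 1))  # 7.7 wt% for MgH2
```

The same function applied to a heavier transition-metal hydride shows why traditional systems sit near 1.5 to 2.0 wt pct: the metal host simply weighs far more per stored hydrogen atom.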
Abstract:
Among the Solar System's bodies, the Moon, Mercury and Mars are at present, or have been in recent years, the object of space missions aimed, among other topics, at improving our knowledge of surface composition. Among the techniques for detecting a planet's mineralogical composition, both from remote and close-range platforms, visible and near-infrared reflectance (VNIR) spectroscopy is a powerful tool, because crystal field absorption bands are related to particular transition metals in well-defined crystal structures, e.g., Fe2+ in the M1 and M2 sites of olivine or pyroxene (Burns, 1993). Thanks to the improvements in the spectrometers onboard the recent missions, a more detailed interpretation of planetary surfaces can now be delineated. However, quantitative interpretation of planetary surface mineralogy is not always a simple task. In fact, several factors, such as the mineral chemistry, the presence of different minerals that absorb in a narrow spectral range, regolith with a variable particle-size range, space weathering, atmospheric composition, etc., act in unpredictable ways on the reflectance spectra of a planetary surface (Serventi et al., 2014). One method for the interpretation of reflectance spectra of unknown materials involves the study of a number of spectra acquired in the laboratory under different conditions, such as different mineral abundances or different particle sizes, in order to derive empirical trends. This is the methodology followed in this PhD thesis: the single factors listed above have been analyzed by creating, in the laboratory, a set of terrestrial analogues with well-defined composition and size. The aim of this work is to provide new tools and criteria to improve our knowledge of the composition of planetary surfaces.
In particular, mixtures with different contents and chemistries of plagioclase (PL) and mafic minerals have been spectroscopically analyzed at different particle sizes and with different relative mineral percentages. The reflectance spectra of each mixture have been analyzed both qualitatively (using the software ORIGIN®) and quantitatively, applying the Modified Gaussian Model (MGM; Sunshine et al., 1990) algorithm. In particular, the variations of the spectral parameters of each absorption band have been evaluated against the volumetric FeO% content in the PL phase and against the PL modal abundance. This delineates calibration curves of composition versus spectral parameters and allows the implementation of spectral libraries. Furthermore, the trends derived from the terrestrial analogues analyzed here and from analogues in the literature have been applied to the interpretation of hyperspectral images of both plagioclase-rich (Moon) and plagioclase-poor (Mars) bodies.
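A simplified sketch of an MGM-style spectral model may help fix ideas: ln(reflectance) is written as a continuum plus superimposed Gaussian absorption bands. The full MGM of Sunshine et al. works with Gaussians in an energy coordinate and fits the band parameters to data; the continuum and band values below are invented.

```python
import math

def mgm_like_reflectance(wavelength_nm, continuum, bands):
    """ln(reflectance) = linear continuum + sum of Gaussian absorptions.
    continuum: (c0, c1); bands: list of (strength < 0, center_nm, width_nm)."""
    ln_r = continuum[0] + continuum[1] * wavelength_nm
    for strength, center, width in bands:
        ln_r += strength * math.exp(-((wavelength_nm - center) ** 2)
                                    / (2.0 * width ** 2))
    return math.exp(ln_r)

# Invented pyroxene-like band near 1000 nm on a flat continuum
band_1um = (-0.6, 1000.0, 90.0)
r_center = mgm_like_reflectance(1000.0, (math.log(0.45), 0.0), [band_1um])
r_wing = mgm_like_reflectance(1400.0, (math.log(0.45), 0.0), [band_1um])
print(round(r_center, 3), round(r_wing, 3))  # deep at band center, near-continuum in the wing
```

Fitting the strength, center and width of each such band is what yields the spectral parameters that the thesis correlates with FeO content and plagioclase abundance.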
Abstract:
To meet the changing needs of customers and to survive in an increasingly globalised and competitive environment, companies need to equip themselves with intelligent tools, enabling managerial levels to make better tactical decisions. However, the implementation of an intelligent system is always a challenge in Small and Medium-sized Enterprises (SMEs). Therefore, a new and simple approach with a 'process rethinking' ability is proposed to generate ongoing process improvements over time. In this paper, a roadmap for the development of an agent-based information system is described. A case example is also provided to show how the system can assist non-specialists, for example managers and engineers, in making the right decisions for continual process improvement.
Abstract:
Many of the recent improvements in the capacity of data cartridge systems have been achieved through the use of narrower tracks, higher linear densities and continuous servo tracking with multi-channel heads. These changes have produced new tribological problems at the head/tape interface. It is crucial that the tribology of such systems is understood, and this will continue to be the case as increasing storage capacities and faster transfer rates are constantly being sought. Chemical changes in the surface of single- and dual-layer MP tape have been correlated with signal performance. An accelerated tape tester, consisting of a custom-made cycler ("loop tester"), was used to ascertain whether results could be produced that were representative of a real tape drive system. A second set of experiments used a modified tape drive (Georgens cycler), which allowed the effects of the tape transport system on the tape surface to be studied. To isolate any effects on the tape surface due to the head/tape interface, read/write heads were not fitted to the cycler. Two further sets of experiments were conducted which included a head in the tape path. This allowed the effects of the head/tape interface on the physical and chemical properties of the head and tape surfaces to be investigated. During the final set of experiments, the effect of an energised MR element on the head/tape interface was investigated. The effect of operating each cycler at extremes of relative humidity and temperature was investigated through the use of an environmental chamber. Extensive use was made of surface-specific analytical techniques such as XPS, AFM, AES and SEM to study the physical and chemical changes that occur at the head/tape interface. Results showed that cycling improved the signal performance of all the tapes tested. The data cartridge drive belt had an effect on the chemical properties of the tape surface with which it was in contact.
Binder degradation also occurred for each tape and appeared to be greater at higher humidity. Lubricant was generally seen to migrate to the tape surface with cycling. Any surface changes likely to affect signal output occurred at the head surface rather than the tape.
Abstract:
The rapidly increasing demand for cellular telephony is placing greater demand on the limited bandwidth resources available. This research is concerned with techniques that enhance the capacity of a Direct-Sequence Code-Division Multiple-Access (DS-CDMA) mobile telephone network. The capacities of both Private Mobile Radio (PMR) and cellular networks are derived, the many techniques currently available are reviewed, and areas for further investigation are identified. One technique developed here is the sectorisation of a cell into toroidal rings. Splitting the cell into concentric rings is shown to increase system capacity, and this is compared with cell clustering and other sectorisation schemes. Another technique for increasing capacity adds to the amount of inherent randomness within the transmitted signal so that the system is better able to extract the wanted signal. A system model has been produced for a cellular DS-CDMA network, and results are presented for two possible strategies. The first strategy is the variation of the chip duration over a signal bit period. Several different variation functions are tried, and a sinusoidal function is shown to provide the greatest increase in the maximum number of system users for any given signal-to-noise ratio. The second strategy is the use of additive amplitude modulation together with data/chip phase-shift keying. The amplitude variations are determined by a sparse code so that the average system power is held near its nominal level. This strategy is shown to provide no further capacity, since the system is sensitive to amplitude variations. When both strategies are employed, however, the sensitivity to amplitude variations is shown to reduce, indicating that the first strategy increases both the capacity and the ability to handle fluctuations in the received signal power.
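The spreading principle underlying the capacity analysis can be sketched in a few lines: each user's bits are multiplied by a long ±1 chip code, and correlating the received sum with the right code recovers that user's bits while the other user's contribution largely averages out. The random codes and bit patterns here are purely illustrative.

```python
import random

def spread(bits, code):
    # Replace each data bit by bit * chip for every chip in the code
    return [b * c for b in bits for c in code]

def despread(signal, code, n_bits):
    # Correlate each bit-length chunk with the code and take the sign
    n = len(code)
    out = []
    for i in range(n_bits):
        chunk = signal[i * n:(i + 1) * n]
        corr = sum(s * c for s, c in zip(chunk, code))
        out.append(1 if corr >= 0 else -1)
    return out

random.seed(0)
code_a = [random.choice((-1, 1)) for _ in range(64)]  # 64-chip spreading codes
code_b = [random.choice((-1, 1)) for _ in range(64)]
bits_a, bits_b = [1, -1, 1, 1], [-1, -1, 1, -1]

# Two users transmit simultaneously over the same channel
rx = [x + y for x, y in zip(spread(bits_a, code_a), spread(bits_b, code_b))]
print(despread(rx, code_a, 4))  # recovers [1, -1, 1, 1]
```

The processing gain of the 64-chip code is what suppresses the interfering user; the thesis's strategies (chip-duration variation, additive amplitude modulation) modify this basic scheme to extract more capacity from the same bandwidth.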