995 results for Self-optimization
Abstract:
Automated Teller Machines (ATMs) are sensitive self-service systems that require significant investments in security and testing. ATM certifications are testing processes for machines that integrate software components from different vendors, performed before their deployment for public use. This project originated from the need to optimize the certification process in an ATM manufacturing company. The process identifies compatibility problems between software components through testing. It is composed of a huge number of manual user tasks that make the process very expensive and error-prone. Moreover, it is not possible to fully automate the process, as it requires human intervention to manipulate ATM peripherals. This project presented important challenges for the development team. First, this is a critical process, as all ATM operations rely on the software under test. Second, the context of use of ATM applications is vastly different from that of ordinary software. Third, an ATM's useful lifetime exceeds 15 years, and both new and old models need to be supported. Fourth, the know-how for efficient testing resides with each specialist and is not explicitly documented. Fifth, the huge number of tests and their importance demand user efficiency and accuracy. All these factors led us to conclude that, beyond the technical challenges, the usability of the intended software solution was critical to the project's success. This business context is the motivation for this Master's Thesis project. Our proposal focused on the development process applied. By combining user-centered design (UCD) with agile development, we ensured both the high priority of usability and the early mitigation of software development risks caused by the technology constraints. We performed 23 development iterations and were finally able to deliver a working solution on time and in line with users' expectations. The evaluation of the project was carried out through usability tests, in which 4 real users participated in different tests in the real context of use. The results were positive according to several metrics: error rate, efficiency, effectiveness, and user satisfaction. We discuss the problems found, the benefits, and the lessons learned in the process. Finally, we measured the expected project benefits by comparing the effort required by the current and the new process (once the new software tool is adopted). The savings correspond to 40% less effort (man-hours) per certification. Future work includes additional evaluation of product usability in a real scenario (with customers) and measuring the benefits in terms of quality improvement.
Abstract:
Among the design goals for future networks, including next-generation networks, are control over energy consumption and network connectivity. These two goals are of special relevance when dealing with constrained networks, as is the case of Wireless Sensor Networks (WSN). These networks consist of devices with low or very low processing capabilities that depend on batteries for their operation, so optimizing energy consumption becomes very important. Several proposals have been made for optimizing energy consumption in this kind of network. Perhaps the best known are those based on the coordinated scheduling of active and sleep intervals, which is indeed one of the most effective ways to extend battery lifetime. The proposal presented in this work uses a probabilistic approach to control the connectivity of a network. The underlying idea is that the network can be expected to remain connected if every node has at least a given number of neighbors. By using some mechanism to maintain that number, we expect to preserve connectivity with lower energy consumption than would be required if a fixed transmission power were used to achieve similar connectivity. To be efficient, the mechanism must have the smallest possible footprint on the devices on which it runs. Therefore, a self-adaptive system based on fuzzy-logic control is proposed. This work includes the design and implementation of the described system, which has been validated in a real deployment. The results confirm that configurations exist in which connectivity can be maintained while saving energy compared to the use of a fixed transmission power.
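As a rough illustration of the idea in this abstract, the following Python sketch shows a node adjusting its transmission power with a tiny fuzzy controller so that its neighbour count stays near a target. The membership functions, rule base, target, and radio limits are our own illustrative assumptions, not the controller implemented in the thesis.

```python
# Hypothetical sketch of a fuzzy self-adaptive transmission-power controller:
# each node nudges its TX power to keep its neighbour count near a target,
# instead of using a fixed power. Rule base and constants are illustrative.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_power_step(neighbors, target=4):
    """Return a TX power adjustment in dB from the neighbour-count error."""
    err = neighbors - target
    # Fuzzify the error into three sets: too_few, ok, too_many.
    too_few  = tri(err, -8, -4, 0)
    ok       = tri(err, -2,  0, 2)
    too_many = tri(err,  0,  4, 8)
    # Rules: too_few -> raise power (+2 dB), ok -> hold (0 dB),
    # too_many -> lower power (-2 dB); weighted-average defuzzification.
    w = too_few + ok + too_many
    if w == 0.0:
        return 2.0 if err < 0 else -2.0   # saturate outside the universe
    return (too_few * 2.0 + ok * 0.0 + too_many * -2.0) / w

# Example control loop for one node (the radio API is assumed):
power_dbm = 0.0
for epoch in range(10):
    n = 3  # a count_neighbors() reading would come from periodic beacons
    power_dbm = max(-20.0, min(8.0, power_dbm + fuzzy_power_step(n)))
```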
Abstract:
The emergence of new horizons in the field of travel assistance management is leading to the development of cutting-edge systems focused on improving existing ones. New opportunities are also arising as systems tend to become more reliable and autonomous. In this paper, a self-learning embedded system for object identification based on adaptive-cooperative dynamic approaches is presented for intelligent sensor infrastructures. The proposed system is able to detect and identify moving objects using a dynamic decision tree. It combines machine learning algorithms and cooperative strategies to make the system more adaptive to changing environments. The proposed system may therefore be very useful for many applications, such as shadow tolling (since several types of vehicles can be distinguished), parking optimization, and traffic-condition monitoring.
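The abstract does not detail the learning mechanism, but a minimal sketch of one plausible reading, retraining a decision tree on a sliding window of labelled sensor features so the classifier tracks a changing environment, could look as follows; the feature source, window size, and retraining schedule are assumptions.

```python
# Hedged sketch of an adaptive ("dynamic") decision tree: the classifier is
# periodically rebuilt from a sliding window of recent labelled examples,
# e.g. labels contributed by cooperating sensors. Illustrative only.
from collections import deque
import numpy as np
from sklearn.tree import DecisionTreeClassifier

window = deque(maxlen=500)            # sliding window of (features, label)
tree = DecisionTreeClassifier(max_depth=6)
labelled_count = 0

def observe(features, label=None):
    """Classify a feature vector; when a label arrives, store it and
    periodically rebuild the tree so the model adapts over time."""
    global labelled_count
    pred = None
    if hasattr(tree, "tree_"):        # the tree has been fitted at least once
        pred = tree.predict(np.array([features]))[0]
    if label is not None:
        window.append((features, label))
        labelled_count += 1
        if labelled_count % 50 == 0:  # inexpensive periodic retraining
            X, y = zip(*window)
            tree.fit(np.array(X), np.array(y))
    return pred
```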
Abstract:
The increasing economic competition drives industry to implement tools that improve the efficiency of its processes. Process automation is one of these tools, and Real-Time Optimization (RTO) is an automation methodology that takes economic aspects into account to update process control in accordance with market prices and disturbances. Basically, RTO uses a steady-state phenomenological model to predict process behavior and then optimizes an economic objective function subject to this model. Although largely implemented in industry, there is no general agreement about the benefits of implementing RTO, due to some limitations discussed in the present work: structural plant/model mismatch, identifiability issues, and low frequency of set-point updates. Some alternative RTO approaches have been proposed in the literature to handle the problem of structural plant/model mismatch. However, there is no sensible comparison evaluating the scope and limitations of these RTO approaches under different aspects. For this reason, the classical two-step method is compared to more recent derivative-based methods (Modifier Adaptation; Integrated System Optimization and Parameter Estimation; and Sufficient Conditions of Feasibility and Optimality) using a Monte Carlo methodology. The results of this comparison show that the classical RTO method is consistent, provided that the model is flexible enough to represent the process topology, that the parameter estimation method is appropriate for the measurement noise characteristics, and that a method to improve the quality of the sample information is available. At each iteration, the RTO methodology updates some key parameters of the model, where identifiability issues caused by the lack of measurements and by measurement noise can be observed, resulting in poor prediction ability. Therefore, four different parameter estimation approaches (Rotational Discrimination; Automatic Selection and Parameter Estimation; Reparametrization via Differential Geometry; and classical nonlinear Least Squares) are evaluated with respect to their prediction accuracy, robustness, and speed. The results show that the Rotational Discrimination method is the most suitable for implementation in an RTO framework, since it requires less a priori information, is simple to implement, and avoids the overfitting caused by the Least Squares method. The third RTO drawback discussed in the present thesis is the low frequency of set-point updates, which increases the period in which the process operates at suboptimal conditions. An alternative to handle this problem is proposed in this thesis, integrating classic RTO and Self-Optimizing Control (SOC) using a new Model Predictive Control strategy. The new approach demonstrates that it is possible to reduce the problem of low frequency of set-point updates, improving economic performance. Finally, the practical aspects of RTO implementation are examined in an industrial case study, a Vapor Recompression Distillation (VRD) process located in the Paulínea refinery of Petrobras. The conclusions of this study suggest that the model parameters are successfully estimated by the Rotational Discrimination method; that RTO is able to improve the process profit by about 3%, equivalent to 2 million dollars per year; and that the integration of SOC and RTO may be an interesting control alternative for the VRD process.
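To make the classical two-step scheme concrete, here is a minimal Python sketch of its loop: step 1 re-estimates model parameters from plant measurements, step 2 re-optimizes an economic objective subject to the updated steady-state model. The toy model, "plant", and prices are assumptions for illustration, not the thesis's process; note how the deliberate structural mismatch and the history-based fit echo the identifiability discussion above.

```python
# Sketch of classical two-step RTO: estimate, then optimize, repeatedly.
import numpy as np
from scipy.optimize import least_squares, minimize

def model(u, theta):
    """Steady-state model prediction for input u and parameters theta."""
    return theta[0] * u / (theta[1] + u)          # toy saturation-type model

def profit(u, theta, price_y=10.0, price_u=2.0):
    return price_y * model(u, theta) - price_u * u

rng = np.random.default_rng(0)
theta, u = np.array([1.0, 1.0]), 1.0              # initial guesses
U, Y = [], []                                      # measurement history
for it in range(20):
    # "Plant" measurement: a structurally different response plus noise,
    # standing in for real data (structural plant/model mismatch).
    y = 2.2 * u / (0.7 + u**1.1) + rng.normal(0, 0.01)
    U.append(u); Y.append(y)
    # Step 1: parameter estimation by nonlinear least squares.
    if len(U) >= 2:                                # avoid an unidentifiable fit
        theta = least_squares(
            lambda th: model(np.array(U), th) - np.array(Y), theta).x
    # Step 2: economic optimization subject to the updated model.
    u = minimize(lambda v: -profit(v[0], theta), [u],
                 bounds=[(0.1, 10.0)]).x[0]
```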
Abstract:
The parameterless self-organizing map (PLSOM) is a new neural network algorithm based on the self-organizing map (SOM). It eliminates the need for a learning rate and for annealing schemes for learning rate and neighborhood size. We discuss the relative performance of the PLSOM and the SOM and demonstrate some tasks in which the SOM fails but the PLSOM performs satisfactorily. Finally, we discuss some example applications of the PLSOM and present a proof of ordering under certain limited conditions.
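A minimal sketch of a PLSOM-style update follows, based on our reading of the algorithm: the learning rate and neighbourhood size are derived from the normalized quantization error of each input rather than from a pre-set annealing schedule. The exact scaling variant (the constant beta and the neighbourhood form) is an assumption and should be checked against the paper.

```python
# PLSOM-style training of a 1-D map: no learning-rate or neighbourhood
# annealing schedule; both are scaled by the current fitting error.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, beta = 20, 2, 6.0
W = rng.uniform(0, 1, (n_nodes, dim))        # weight vectors
grid = np.arange(n_nodes)                    # node positions on the map
rho = 1e-9                                   # running maximum error

for x in rng.uniform(0, 1, (1000, dim)):     # training inputs
    d = np.linalg.norm(W - x, axis=1)
    c = int(np.argmin(d))                    # winning node
    rho = max(rho, d[c])                     # normalize by worst error so far
    eps = d[c] / rho                         # data-driven learning rate
    theta = beta * eps + 1e-9                # data-driven neighbourhood size
    h = np.exp(-((grid - c) ** 2) / theta**2)
    W += eps * h[:, None] * (x - W)          # pull nodes toward the input
```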
Abstract:
When composing stock portfolios, managers frequently choose among hundreds of stocks. The stocks' risk properties are analyzed with statistical tools, and managers try to combine them to meet the investors' risk profiles. A recently developed tool for performing such optimization is called full-scale optimization (FSO). This methodology is very flexible with respect to investor preferences, but because of computational limitations it has until now been infeasible when many stocks are considered. We apply the artificial intelligence technique of differential evolution to solve FSO-type stock selection problems over 97 assets. Differential evolution finds the optimal solutions by self-learning from randomly drawn candidate solutions. We show that this search technique makes the large-scale problem computationally feasible and that the solutions retrieved are stable. The study also lends further merit to the FSO technique, as it shows that the solutions suit investor risk profiles better than portfolios retrieved with traditional methods.
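For readers unfamiliar with the search technique, here is a short Python sketch of differential evolution (the common DE/rand/1/bin variant) applied to an FSO-style problem: candidate weight vectors are mutated and recombined, and fitness is the mean investor utility over historical return scenarios. The utility function, constraints, and return data are placeholders, not the paper's specification.

```python
# Differential evolution over long-only portfolio weights, FSO-style:
# fitness is expected utility computed directly on historical returns.
import numpy as np

rng = np.random.default_rng(1)
n_assets, pop_size, F, CR = 97, 200, 0.8, 0.9
returns = rng.normal(0.0005, 0.01, (1000, n_assets))   # placeholder history

def normalize(w):
    w = np.clip(w, 0, None)               # long-only, fully invested
    return w / w.sum() if w.sum() > 0 else np.full_like(w, 1 / len(w))

def fitness(w):
    port = returns @ w
    return np.mean(np.log1p(port))        # e.g. log utility (FSO allows any)

pop = np.array([normalize(rng.random(n_assets)) for _ in range(pop_size)])
fit = np.array([fitness(w) for w in pop])
for gen in range(300):
    for i in range(pop_size):
        a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
        mutant = a + F * (b - c)                       # mutation
        cross = rng.random(n_assets) < CR              # binomial crossover
        trial = normalize(np.where(cross, mutant, pop[i]))
        f = fitness(trial)
        if f > fit[i]:                                 # greedy selection
            pop[i], fit[i] = trial, f
best = pop[np.argmax(fit)]
```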
Abstract:
This paper addresses the problem of obtaining detailed 3D reconstructions of human faces in real time and with inexpensive hardware. We present an algorithm based on a monocular multi-spectral photometric-stereo setup. This kind of system is known to capture highly detailed deforming 3D surfaces at high frame rates without any expensive hardware or synchronized light stage. However, the main challenge of such a setup is the calibration stage, which depends on the light setup and how the lights interact with the specific material being captured, in this case human faces. For this purpose we develop a self-calibration technique in which the person being captured is asked to perform a rigid motion in front of the camera while maintaining a neutral expression. Rigidity constraints are then used to compute the head's motion with a structure-from-motion algorithm. Once the motion is obtained, a multi-view stereo algorithm reconstructs a coarse 3D model of the face. This coarse model is then used to estimate the lighting parameters with a stratified approach: in the first step we use a RANSAC search to identify purely diffuse points on the face and simultaneously estimate the diffuse reflectance model; in the second step we apply non-linear optimization to fit a non-Lambertian reflectance model to the outliers of the previous step. The calibration procedure is validated with synthetic and real data.
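A hedged sketch of the first stratified step could look like this: given coarse surface normals (from the multi-view stereo model) and observed pixel colours, RANSAC fits a linear Lambertian mixing model (colour = M @ normal), keeping diffuse pixels as inliers; the outliers would then go to the non-linear non-Lambertian fit. Array shapes, the threshold, and the linear form of the diffuse model are assumptions for illustration.

```python
# RANSAC identification of diffuse points plus a linear diffuse model fit.
import numpy as np

def ransac_diffuse(normals, colors, n_iter=500, thresh=0.05, rng=None):
    """normals, colors: (N, 3) arrays; returns (M, inlier_mask)."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(normals), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(normals), 3, replace=False)   # minimal sample
        N3, C3 = normals[idx], colors[idx]
        if abs(np.linalg.det(N3)) < 1e-6:
            continue                                       # degenerate sample
        M = np.linalg.solve(N3, C3).T                      # from C3 = N3 @ M.T
        residual = np.linalg.norm(colors - normals @ M.T, axis=1)
        inliers = residual < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if not best_inliers.any():
        return None, best_inliers
    # Refit on all inliers (least squares) for the final diffuse model.
    M, *_ = np.linalg.lstsq(normals[best_inliers], colors[best_inliers],
                            rcond=None)
    return M.T, best_inliers
```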
Abstract:
Background: Parkinson’s disease (PD) is an incurable neurological disease with approximately 0.3% prevalence. The hallmark symptom is gradual movement deterioration. Current scientific consensus about disease progression holds that symptoms will worsen smoothly over time unless treated. Accurate information about symptom dynamics is of critical importance to patients, caregivers, and the scientific community for the design of new treatments, clinical decision making, and individual disease management. Long-term studies characterize the typical time course of the disease as an early linear progression gradually reaching a plateau in later stages. However, symptom dynamics over durations of days to weeks remain unquantified. Currently, there is a scarcity of objective clinical information about symptom dynamics at intervals shorter than 3 months stretching over several years, but Internet-based patient self-report platforms may change this. Objective: To assess the clinical value of online self-reported PD symptom data recorded by users of the health-focused Internet social research platform PatientsLikeMe (PLM), in which patients quantify their symptoms on a regular basis on a subset of the Unified Parkinson’s Disease Ratings Scale (UPDRS). By analyzing these data, we aim to open a scientific window on the nature of symptom dynamics for assessment intervals shorter than 3 months over durations of several years. Methods: Online self-reported data were validated against the gold-standard Parkinson’s Disease Data and Organizing Center (PD-DOC) database, containing clinical symptom data at intervals greater than 3 months. The data were compared visually using quantile-quantile plots and numerically using the Kolmogorov-Smirnov test. Using a simple piecewise linear trend estimation algorithm, the PLM data were smoothed to separate random fluctuations from continuous symptom dynamics. Subtracting the trends from the original data revealed random fluctuations in symptom severity. The average magnitude of fluctuations versus time since diagnosis was modeled using a gamma generalized linear model. Results: Distributions of ages at diagnosis and UPDRS in the PLM and PD-DOC databases were broadly consistent. The PLM patients were systematically younger than the PD-DOC patients and showed increased symptom severity in the PD off state. The average fluctuation in symptoms (UPDRS Parts I and II) was 2.6 points at the time of diagnosis, rising to 5.9 points 16 years after diagnosis. These fluctuations exceed the estimated minimal and moderate clinically important differences, respectively. Not all patients conformed to the current clinical picture of gradual, smooth changes: many patients had regimes where symptom severity varied in an unpredictable manner, or underwent large rapid changes in an otherwise more stable progression. Conclusions: This information about short-term PD symptom dynamics contributes new scientific understanding of disease progression, currently very costly to obtain without self-administered Internet-based reporting. This understanding should have implications for the optimization of clinical trials of new treatments and for the choice of treatment decision timescales.
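The analysis pipeline described in the Methods can be sketched in a few lines of Python: a Kolmogorov-Smirnov comparison of two score samples, removal of a simple piecewise linear trend from one patient's series, and a gamma GLM of residual fluctuation magnitude against disease duration. The segment-wise trend estimator is a stand-in for the paper's algorithm, the data are simulated, and a recent statsmodels (with CamelCase link classes) is assumed.

```python
# Sketch: KS test, piecewise-linear detrending, gamma GLM on fluctuations.
import numpy as np
from scipy.stats import ks_2samp
import statsmodels.api as sm

rng = np.random.default_rng(2)
t = np.arange(0, 16, 0.25)                         # years since diagnosis
updrs = 10 + 2.0 * t + rng.normal(0, 1 + 0.3 * t)  # simulated self-reports

# KS test between two cohorts (e.g. PLM vs PD-DOC scores).
stat, p = ks_2samp(updrs, updrs + rng.normal(0, 2, len(updrs)))

# Piecewise linear trend: a least-squares line per fixed-length segment.
trend = np.empty_like(updrs)
for seg in np.array_split(np.arange(len(t)), 8):
    coef = np.polyfit(t[seg], updrs[seg], 1)
    trend[seg] = np.polyval(coef, t[seg])
fluct = np.abs(updrs - trend)                      # residual fluctuations

# Gamma GLM (log link): fluctuation magnitude vs time since diagnosis.
X = sm.add_constant(t)
glm = sm.GLM(fluct + 1e-6, X,                      # shift keeps values > 0
             family=sm.families.Gamma(link=sm.families.links.Log())).fit()
```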
Abstract:
Optimizing the design, creation, operation, and maintenance processes of expert systems is an important problem in artificial intelligence theory and in decision-making methods. In this paper, an approach to solving it is offered, using a technology based on the methodology of systems analysis, a subject-domain ontology, and the principles and methods of self-organisation. Aspects of realizing this approach, based on constructing a correspondence between the hierarchical structure of the ontology and the sequence of questions in automated examination systems, are expounded.
Abstract:
Mobile Network Optimization (MNO) technologies have advanced at a tremendous pace in recent years, and the Dynamic Network Optimization (DNO) concept emerged years ago, aiming to continuously optimize the network in response to variations in network traffic and conditions. Yet DNO development is still in its infancy, mainly hindered by a significant bottleneck: the lengthy optimization runtime. This paper identifies parallelism in greedy MNO algorithms and presents an advanced distributed parallel solution. The solution is designed, implemented, and applied to real-life projects, yielding a significant, highly scalable, and nearly linear speedup of up to 6.9 and 14.5 on distributed 8-core and 16-core systems, respectively. Meanwhile, the optimization outputs exhibit self-consistency and high precision compared to their sequential counterparts. This is a milestone in realizing DNO. Furthermore, the techniques may be applied to similar applications based on greedy optimization algorithms.
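The parallelization idea generalizes well, and a minimal Python sketch may help: at each greedy step, candidate changes are scored in parallel across workers, and only the single best change is applied sequentially. Because just the deterministic evaluation is parallelized, the trajectory stays self-consistent with a sequential greedy run. The cost model and candidate set below are placeholders, not the paper's MNO objective.

```python
# Parallel candidate evaluation inside an otherwise sequential greedy loop.
from multiprocessing import Pool

def score(state, cell, value):
    """Objective after tentatively applying candidate (cell, value)."""
    trial = list(state)
    trial[cell] = value
    return sum((v - 0.5) ** 2 for v in trial), cell, value  # placeholder cost

if __name__ == "__main__":
    state = [0.9] * 50                               # e.g. per-cell parameters
    candidates = [(c, v / 10) for c in range(50) for v in range(11)]
    with Pool(8) as pool:
        for step in range(20):                       # greedy iterations
            tasks = [(state, c, v) for c, v in candidates]
            cost, cell, value = min(pool.starmap(score, tasks))  # parallel
            state[cell] = value                      # apply best move only
```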
Abstract:
Flow experience, a holistic sensation of total involvement in an activity, seems to have positive influences on musical performance activities. Although its main requirements (balance between challenges and skills, clear goals, and unequivocal feedback) are inherent elements of musical practice, there is a lack of research on flow occurrences in the context of musical practice and on how specific practice behaviours affect the experience of flow and its particular dimensions. The aims of this thesis were to investigate advanced performers' dispositions to flow in musical practice, and to investigate whether the frequency of these experiences of holistic engagement with practice is associated with self-regulatory practice behaviours. 168 advanced classically trained performers (male = 50.0%; female = 50.0%), ranging in age from 18 to 74 years (M = 34.41, SD = 12.39), answered a survey that included two measures: the Dispositional Short Flow Scale, assessing performers' flow dispositions, and the Self-Regulated Practice Behaviours Questionnaire, developed specifically for the present research. The overall results of the survey suggested that advanced musicians have high dispositions to flow in musical practice, not associated with the participants' demographic characteristics. Three of the individual flow indicators were experienced less often, suggesting that the most intense flow experiences are rare in musical practice. However, the results point to the existence of another relevant experience, named optimal practice experience. Practice engagement levels were positively associated with knowledge of one's own personal resources and with a capacity for practice organization, but not with the inclusion or use of external resources. A capacity for setting optimal practice goals was related to self-regulation and to the immersion aspects of flow. The current findings offer new clues about the assessment of flow dispositions in performers, helping to clarify how daily practice can heighten positive affective responses in musicians who are vulnerable to the requirements and difficulties of deliberate practice, as well as to other negative practice outcomes. The current research raises issues pertaining to the optimization and sustaining of flow in daily practice, suggesting future directions in the study of the affective subjective functioning of engagement with deliberate practice.
Abstract:
The aim of this study was to optimize the aqueous extraction conditions for the recovery of phenolic compounds and antioxidant capacity from lemon pomace using response surface methodology. An experiment based on a Box–Behnken design was conducted to analyse the effects of temperature, time, and sample-to-solvent ratio on the extraction of total phenolic compounds (TPC), total flavonoids (TF), proanthocyanidins, and antioxidant capacity. The sample-to-solvent ratio had a negative effect on all the dependent variables, while extraction temperature and time had a positive effect only on TPC yields and ABTS antioxidant capacity. The optimal extraction conditions were 95 °C, 15 min, and a sample-to-solvent ratio of 1:100 g/ml. Under these conditions, the aqueous extracts had the same TPC and TF content and the same antioxidant capacity as methanol extracts obtained by sonication. Therefore, these conditions could be applied for further extraction and isolation of phenolic compounds from lemon pomace.
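The response-surface step behind such studies is easy to sketch: fit a full quadratic model to the Box–Behnken runs (three coded factors here: temperature, time, sample-to-solvent ratio) and locate the predicted optimum within the design region. The response values below are simulated placeholders, not the study's measurements.

```python
# Quadratic response-surface fit on a 3-factor Box-Behnken design.
import numpy as np
from itertools import product

def quad_terms(x1, x2, x3):
    return [1, x1, x2, x3, x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2]

# Box-Behnken design for 3 factors (coded levels -1, 0, +1): the 12 edge
# midpoints plus 3 centre-point replicates.
design = [(a, b, 0) for a in (-1, 1) for b in (-1, 1)] + \
         [(a, 0, b) for a in (-1, 1) for b in (-1, 1)] + \
         [(0, a, b) for a in (-1, 1) for b in (-1, 1)] + [(0, 0, 0)] * 3
X = np.array([quad_terms(*run) for run in design])

# Simulated responses stand in for measured TPC yields (one per run).
rng = np.random.default_rng(3)
def true_response(x1, x2, x3):
    return 50 + 5*x1 + 3*x2 - 6*x3 - 2*x1**2 - 3*x3**2
y = np.array([true_response(*run) + rng.normal(0, 0.5) for run in design])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # quadratic coefficients
grid = np.linspace(-1, 1, 21)                    # search the coded region
best = max(product(grid, repeat=3),
           key=lambda p: np.dot(quad_terms(*p), beta))
```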
Abstract:
The work presented herein focused on the automation of coordination-driven self-assembly, exploring methods that allow syntheses to be followed more closely while forming new ligands, as part of a fundamental study of the digitization of chemical synthesis and discovery. While the control and understanding of the principles of pre-organization and self-sorting under non-equilibrium conditions remain a key goal, a clear gap has been identified in the absence of approaches that permit fast screening and real-time observation of the reaction process under different conditions. A firm emphasis was thus placed on the realization of an autonomous chemical robot, which can not only monitor and manipulate coordination chemistry in real time, but also allows the exploration of a large chemical parameter space defined by the ligand building blocks and the metal to coordinate. The self-assembly of imine ligands with copper and nickel cations was studied in a multi-step approach using a self-built flow system capable of automatically controlling the liquid handling and collecting data in real time with a benchtop MS and an NMR spectrometer. This study led to the identification of a transient Cu(I) species in situ, which allows the formation of dimeric and trimeric carbonato-bridged Cu(II) assemblies. Furthermore, new Ni(II) complexes and, more remarkably, a new binuclear Cu(I) complex, which usually requires long and laborious inert conditions, could be isolated. The study was then expanded to the autonomous optimization of the ligand synthesis by enabling feedback control of the chemical system via benchtop NMR. The synthesis of new polydentate ligands emerged from the study, aiming to enhance the complexity of the chemical system and accelerate the discovery of new complexes. This type of ligand consists of 1-pyridinyl-4-imino-1,2,3-triazole units, which can coordinate with different metal salts. The studies to test the CuAAC synthesis via microwave led to the discovery of four new Cu complexes, one of them a coordination polymer obtained from a solvent-dependent crystallization technique. With the goal of easier integration into an automated system, copper tubing was exploited as the chemical reactor for the synthesis of this ligand, as it efficiently enhances the rate of triazole formation and consequently promotes the formation of the full ligand in high yields within two hours. Lastly, the digitization of coordination-driven self-assembly was realized for the first time using an in-house autonomous chemical robot, herein named the 'Finder'. The chemical parameter space to explore was defined by the selection of six variables: the ligand precursors necessary to form complex ligands (aldehydes, alkyne-amines, and azides), the metal salt solutions, and other reaction parameters (duration, temperature, and reagent volumes). The platform was assembled using round-bottom flasks, flow syringe pumps, copper tubing as an active reactor, and in-line analytics: a pH meter probe, a UV-vis flow cell, and a benchtop MS. Control over the system was then obtained with an algorithm capable of autonomously focusing the experiments on the most reactive region of the chemical parameter space (by avoiding areas of low interest). This study led to interesting observations, such as metal-exchange phenomena, and also to the autonomous discovery of self-assembled structures in solution and in the solid state, such as 1-pyridinyl-4-imino-1,2,3-triazole-based Fe complexes and two helicates based on the same ligand coordination motif.
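The abstract does not specify the focusing algorithm, but the closed-loop idea can be sketched generically: sample candidate experiments, score each from in-line analytics, and bias successive batches toward the most reactive region while avoiding low-interest areas. The parameter bounds and the scoring function below are placeholders standing in for the platform's real MS/UV-vis/pH feedback.

```python
# Generic closed-loop exploration sketch: shrink the sampling window
# around the best-scoring ("most reactive") point over successive batches.
import random

random.seed(4)
bounds = {"temp": (20, 80), "time": (5, 120), "vol": (0.1, 2.0)}

def run_experiment(params):
    """Placeholder reactivity score; a robot would measure this in-line."""
    return -((params["temp"] - 60) ** 2) / 100 - abs(params["time"] - 90) / 30

def sample(center=None, spread=1.0):
    p = {}
    for k, (lo, hi) in bounds.items():
        if center is None:
            p[k] = random.uniform(lo, hi)            # uninformed first batch
        else:                                         # focus around the best
            w = (hi - lo) * spread / 2
            p[k] = min(hi, max(lo, random.uniform(center[k] - w,
                                                  center[k] + w)))
    return p

best, best_score, spread = None, float("-inf"), 1.0
for batch in range(8):
    for _ in range(10):
        p = sample(best, spread)
        s = run_experiment(p)
        if s > best_score:
            best, best_score = p, s
    spread *= 0.7        # narrow successive batches onto the reactive region
```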
Abstract:
Poster presented at: 21st World Hydrogen Energy Conference 2016. Zaragoza, Spain, 13-16 June 2016.
Abstract:
The paper presents an investigation of fixed-referenced and self-referenced wave energy converters and a comparison of their corresponding wave energy conversion capacities in real seas. For the comparison, two popular wave energy converters, the point absorber and the oscillating water column, and their power conversion capacities in fixed-referenced and self-referenced forms have been numerically studied and compared. In the numerical models, the devices' power extraction from the seas is maximized using correspondingly optimized power take-offs in different sea states, so that their power conversion capacities can be calculated and compared. The comparisons and analyses show that the energy conversion capacities of self-referenced devices can be significantly increased if the motions of the device itself are utilized for wave energy conversion, and that self-referenced devices can be designed to be compliant in long waves, which could be a very beneficial factor for device survivability in extreme wave conditions (normally long waves). In this regard, self-referenced WECs (wave energy converters) may be the better option in terms of wave energy conversion from the targeted waves in seas (those occurring most frequently) and in terms of device survivability, especially in extreme waves, when compared to their fixed-referenced counterparts.
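A minimal sketch of the per-sea-state power take-off optimization used in such comparisons follows, for a heaving point absorber under linear wave theory: the motion amplitude comes from the frequency-domain equation of motion, the average absorbed power is P = 0.5 * B_pto * w^2 * |X|^2, and the PTO damping B_pto is tuned numerically for each wave frequency. All numerical values are assumptions for illustration, not the paper's device parameters.

```python
# Tune PTO damping per sea state to maximize mean absorbed power.
from scipy.optimize import minimize_scalar

M, A, B, C = 1e5, 5e4, 2e4, 4e5   # mass, added mass, damping, stiffness (SI)
F = 1e5                            # wave excitation force amplitude (N)

def mean_power(b_pto, w):
    """Average absorbed power for PTO damping b_pto at wave frequency w."""
    # Heave response: (-w^2 (M+A) + i w (B + b_pto) + C) X = F
    X = F / abs(-w**2 * (M + A) + 1j * w * (B + b_pto) + C)
    return 0.5 * b_pto * w**2 * X**2

for w in (0.6, 0.9, 1.2):          # a few representative sea states (rad/s)
    res = minimize_scalar(lambda b: -mean_power(b, w),
                          bounds=(1e3, 1e7), method="bounded")
    print(f"w={w}: optimal B_pto={res.x:.3e}, P={-res.fun / 1e3:.1f} kW")
```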