927 results for Multi-objective functions
Abstract:
To analyze the characteristics and predict the dynamic behaviors of complex systems over time, comprehensive research is needed to enable the development of systems that can intelligently adapt to evolving conditions and infer new knowledge with algorithms that are not predesigned. This dissertation research studies the integration of techniques and methodologies resulting from the fields of pattern recognition, intelligent agents, artificial immune systems, and distributed computing platforms to create technologies that can more accurately describe and control the dynamics of real-world complex systems. The need for such technologies is emerging in manufacturing, transportation, hazard mitigation, weather and climate prediction, homeland security, and emergency response. Motivated by the ability of mobile agents to dynamically incorporate additional computational and control algorithms into executing applications, mobile agent technology is employed in this research for adaptive sensing and monitoring in a wireless sensor network. Mobile agents are software components that can travel from one computing platform to another in a network, carrying the programs and data states needed to perform their assigned tasks. To support the generation, migration, communication, and management of mobile monitoring agents, an embeddable mobile agent system (Mobile-C) is integrated with sensor nodes. Mobile monitoring agents visit distributed sensor nodes, read real-time sensor data, and perform anomaly detection using onboard pattern recognition algorithms. Optimal control of the agents is achieved by mimicking the adaptive immune response and applying multi-objective optimization algorithms. The mobile agent approach has the potential to reduce the communication load and energy consumption of monitoring networks.
The major research work of this dissertation project includes: (1) studying effective feature extraction methods for time series measurement data; (2) investigating the impact of feature extraction methods and dissimilarity measures on the performance of pattern recognition; (3) researching the effects of environmental factors on the performance of pattern recognition; (4) integrating an embeddable mobile agent system with wireless sensor nodes; (5) optimizing agent generation and distribution using artificial immune system concepts and multi-objective algorithms; (6) applying mobile agent technology and pattern recognition algorithms to adaptive structural health monitoring and driving cycle pattern recognition; (7) developing a web-based monitoring network to enable remote visualization and analysis of real-time sensor data. Techniques and algorithms developed in this dissertation project will contribute to research advances in networked distributed systems operating under changing environments.
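The feature-extraction-plus-dissimilarity idea behind the monitoring agents can be illustrated with a toy sketch. The window length, features (mean and standard deviation), Euclidean dissimilarity, threshold, and injected fault below are all illustrative assumptions, not the dissertation's actual algorithms:

```python
import numpy as np

# Toy sketch of agent-side anomaly detection: extract simple features from
# sliding windows of sensor readings and flag windows whose dissimilarity
# from a baseline feature vector exceeds a threshold.  All parameters and
# data here are illustrative placeholders.
rng = np.random.default_rng(1)
signal = rng.normal(0, 1, 600)              # synthetic sensor stream
signal[400:430] += 6.0                      # injected anomaly

def features(w):                            # simple window features
    return np.array([w.mean(), w.std()])

baseline = features(signal[:100])           # reference behavior
alarms = []
for start in range(0, len(signal) - 50, 50):
    w = signal[start:start + 50]
    d = np.linalg.norm(features(w) - baseline)   # Euclidean dissimilarity
    if d > 3.0:                                  # illustrative threshold
        alarms.append(start)

print(alarms)                               # only the window covering the fault
```

In a mobile-agent setting, the `features` and threshold logic would travel with the agent to each sensor node, so only alarms, not raw data, cross the network.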
Abstract:
A servo-controlled automatic machine can perform tasks that involve the synchronized actuation of a significant number of servo-axes, namely one-degree-of-freedom (DoF) electromechanical actuators. Each servo-axis comprises a servo-motor, a mechanical transmission and an end-effector, and is responsible for generating the desired motion profile and providing the power required to achieve the overall task. The design of such a machine must involve a detailed study from a mechatronic viewpoint, due to its combined electrical and mechanical nature. The first objective of this thesis is the development of an overarching electromechanical model of a servo-axis. Every loss source is taken into account, be it mechanical or electrical. The mechanical transmission is modeled by means of a sequence of lumped-parameter blocks. The electric model of the motor and the inverter takes into account winding losses, iron losses and controller switching losses. No experimental characterizations are needed to implement the electric model, since its parameters are inferred from the data available in commercial catalogs. With the global model at hand, the second objective of this work is to perform optimization analyses, in particular the selection of the motor-reducer unit. The optimal transmission ratios that minimize several objective functions are found. An optimization process is carried out and repeated for each candidate motor. We then present a novel method in which the discrete set of available motors is extended to a continuous domain by fitting manufacturer data. The problem becomes a two-dimensional nonlinear optimization subject to nonlinear constraints, and its solution gives the optimal choice of the motor-reducer system. The presented electromechanical model, along with the implementation of optimization algorithms, forms a complete and powerful simulation tool for servo-controlled automatic machines.
The tool allows for determining a wide range of electric and mechanical parameters and the behavior of the system in different operating conditions.
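The optimal-transmission-ratio idea can be sketched on the simplest possible case. For a purely inertial load, the motor torque needed to produce a load acceleration a through a gearbox of ratio n is T(n) = Jm·n·a + JL·a/n, and the minimizer is the classic inertia-matching ratio n* = sqrt(JL/Jm). The numbers below are illustrative placeholders, not values from the thesis:

```python
import numpy as np

# Minimal sketch of transmission-ratio optimization for one servo-axis:
# reflected motor torque T(n) = Jm*n*a + JL*a/n for a purely inertial load.
# The minimizing ratio is the "inertia matching" value n* = sqrt(JL/Jm).
Jm, JL = 2e-4, 5e-2      # motor and load inertia [kg m^2] (illustrative)
a = 100.0                # required load acceleration [rad/s^2]

n = np.linspace(1.0, 50.0, 100_000)     # candidate transmission ratios
torque = Jm * n * a + JL * a / n        # reflected motor torque per ratio
n_opt = n[np.argmin(torque)]            # numerically optimal ratio

print(n_opt, np.sqrt(JL / Jm))          # numerical vs analytic optimum (~15.8)
```

The thesis's full objective functions also include the loss terms of the electromechanical model, so the optimum generally departs from the pure inertia-matching value; this sketch only shows the shape of the search.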
Abstract:
We address the problem of automotive cybersecurity from the point of view of Threat Analysis and Risk Assessment (TARA). The central question motivating the thesis is that of the acceptability of risk, which is vital in deciding whether to implement cybersecurity solutions. For this purpose, we develop a quantitative framework that takes as input the results of risk assessment and defines measures of various facets of a possible risk response; we then exploit the natural presence of trade-offs (cost versus effectiveness) to formulate the problem as a multi-objective optimization. Finally, we develop a stochastic model of the future evolution of the risk factors by means of Markov chains, and we adapt the formulations of the optimization problems to this non-deterministic context. The thesis is the result of a collaboration with the Vehicle Electrification division of Marelli, in particular with the Cybersecurity team based in Bologna; this allowed us to consider a particular instance of the problem, derived from a real TARA, in order to test both the deterministic and the stochastic framework in a real-world application. The collaboration also explains why the work often assumes the point of view of a tier-1 supplier; however, the analyses performed can be adapted to any other level of the supply chain.
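The cost-versus-effectiveness trade-off at the heart of the multi-objective formulation can be sketched as Pareto filtering of candidate risk responses. The option names, costs, and residual-risk numbers below are invented for illustration and are not from the thesis or any real TARA:

```python
# Toy sketch of the cost-vs-effectiveness trade-off: keep only the
# Pareto-optimal risk responses, i.e. those for which no other option is
# simultaneously cheaper and more effective.  All values are illustrative.
options = {
    "do nothing":      (0.0, 9.0),   # (cost, residual risk)
    "firewall update": (2.0, 5.0),
    "secure boot":     (5.0, 2.0),
    "full redesign":   (9.0, 1.5),
    "audit only":      (3.0, 6.0),   # dominated by "firewall update"
}

def pareto(opts):
    front = []
    for name, (c, r) in opts.items():
        dominated = any(c2 <= c and r2 <= r and (c2, r2) != (c, r)
                        for c2, r2 in opts.values())
        if not dominated:
            front.append(name)
    return front

print(sorted(pareto(options)))   # "audit only" is filtered out
```

A decision maker then picks a point on this front according to the acceptability-of-risk criterion; the stochastic extension re-evaluates the front as the Markov-modeled risk factors evolve.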
Abstract:
The Three-Dimensional Single-Bin-Size Bin Packing Problem is one of the most studied problems in the Cutting & Packing category. From a strictly mathematical point of view, it consists of packing a finite set of strongly heterogeneous "small" boxes, called items, into a finite set of identical "large" rectangular containers, called bins, minimizing the unused volume and requiring that the items be packed without overlapping. The great interest is mainly due to the number of real-world applications in which it arises, such as pallet and container loading, cutting objects out of a piece of material, and packaging design. Depending on the application, additional objective functions and practical constraints may be needed. After a brief discussion of the real-world applications of the problem and an exhaustive literature review, the design of a two-stage algorithm to solve the aforementioned problem is presented. The algorithm must be able to provide the spatial coordinates of the placed boxes' vertices as well as the optimal box input sequence, while guaranteeing geometric, stability and fragility constraints and a reduced computational time. Due to the NP-hard complexity of this type of combinatorial problem, a fusion of metaheuristic and machine learning techniques is adopted: in particular, a hybrid genetic algorithm coupled with a feedforward neural network. In the first stage, a rich dataset is created starting from a set of real input instances provided by an industrial company, and the feedforward neural network is trained on it. Once trained, given a new input instance, the hybrid genetic algorithm runs using the neural network output as its input parameter vector and provides the optimal solution as output. The effectiveness of the proposed work is confirmed via several experimental tests.
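To give intuition for the packing objective, here is a drastically simplified sketch that ignores 3-D geometry entirely and packs by volume only, with a first-fit-decreasing rule. The real algorithm additionally enforces the geometric, stability, and fragility constraints and uses the genetic search; this toy version is not the thesis's method:

```python
# Drastically simplified bin packing sketch: items reduced to volumes,
# packed by first-fit-decreasing into identical bins.  Illustrative only;
# the actual problem is 3-D with stability and fragility constraints.
def first_fit_decreasing(volumes, bin_capacity):
    bins = []                                # remaining capacity per open bin
    for v in sorted(volumes, reverse=True):  # largest items first
        for i, free in enumerate(bins):
            if v <= free:                    # fits in an already-open bin
                bins[i] -= v
                break
        else:
            bins.append(bin_capacity - v)    # open a new bin
    return len(bins)

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], 10))   # packs into 2 bins
```

Even this one-dimensional relaxation is NP-hard to solve optimally; first-fit-decreasing is a classic approximation, which is why the thesis resorts to metaheuristics for the full 3-D problem.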
Abstract:
Several decision and control tasks involve networks of cyber-physical systems that need to be coordinated and controlled according to a fully-distributed paradigm involving only local communications without any central unit. This thesis focuses on distributed optimization and games over networks from a system theoretical perspective. In the addressed frameworks, we consider agents communicating only with neighbors and running distributed algorithms with optimization-oriented goals. The distinctive feature of this thesis is to interpret these algorithms as dynamical systems and, thus, to resort to powerful system theoretical tools for both their analysis and design. We first address the so-called consensus optimization setup. In this context, we provide an original system theoretical analysis of the well-known Gradient Tracking algorithm in the general case of nonconvex objective functions. Then, inspired by this method, we provide and study a series of extensions to improve the performance and to deal with more challenging settings, e.g., the derivative-free framework or the online one. Subsequently, we tackle the recently emerged framework named distributed aggregative optimization. For this setup, we develop and analyze novel schemes to handle (i) online instances of the problem, (ii) "personalized" optimization frameworks, and (iii) feedback optimization settings. Finally, we adopt a system theoretical approach to address aggregative games over networks, both in the presence and in the absence of linear coupling constraints among the decision variables of the players. In this context, we design and inspect novel fully-distributed algorithms, based on tracking mechanisms, that outperform state-of-the-art methods in finding the Nash equilibrium of the game.
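The Gradient Tracking algorithm analyzed in the thesis can be illustrated on a toy consensus-optimization instance. The sketch below uses scalar quadratic local costs f_i(x) = (x - a_i)^2/2, so the network-wide minimizer is mean(a); the ring-graph weights, step size, and cost parameters are illustrative assumptions, not the thesis's experiments:

```python
import numpy as np

# Minimal Gradient Tracking sketch: each agent mixes with neighbors (W is
# doubly stochastic) and descends along a tracker s of the average gradient.
a = np.array([1.0, 4.0, -2.0, 5.0])          # local cost parameters
W = np.array([[.50, .25, .00, .25],
              [.25, .50, .25, .00],
              [.00, .25, .50, .25],
              [.25, .00, .25, .50]])         # ring-graph mixing weights

def grad(x):                                 # stacked local gradients of f_i
    return x - a

x = np.zeros(4)
s = grad(x)                                  # tracker starts at local gradients
alpha = 0.1                                  # constant step size
for _ in range(500):
    x_new = W @ x - alpha * s                # consensus step + tracked descent
    s = W @ s + grad(x_new) - grad(x)        # gradient tracking update
    x = x_new

print(x)                                     # all agents reach mean(a) = 2.0
```

The dynamical-systems viewpoint of the thesis treats exactly this recursion, in the pair (x, s), as a feedback interconnection amenable to system-theoretic analysis.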
Abstract:
Nowadays, electrical machines are undergoing ever-increasing development, and extensive research is currently being dedicated to improving their efficiency and torque/power density. Compared to conventional random windings, hairpin windings inherently feature lower DC resistance, higher fill factor, better thermal performance, improved reliability, and an automated manufacturing process. However, several challenges need to be addressed, including electromagnetic, thermal, and manufacturing aspects. Of these, the high ohmic losses in high-frequency operation due to skin and proximity effects are the most severe, resulting in low efficiency or high temperatures. This work highlights the challenges of hairpin windings in high-frequency operation and shows the limits of applicability of standard design approaches. Afterward, a multi-objective design optimization is proposed, aiming to enhance the exploitation of hairpin technology in electrical machines. Efficiency and volumetric power density are considered as the main design objectives. Subsequently, a paradigm shift is proposed for the design of electric motors equipped with hairpin windings, where it is proven that a temperature-oriented approach is beneficial when designing this type of pre-formed winding. Furthermore, the effect of the rotor topology on AC losses is also considered. After providing design recommendations and FE electromagnetic and thermal evaluations, experimental tests are also performed for validation purposes on a motorette wound with pre-formed conductors. The results show that operating the machine at higher temperatures could be beneficial to efficiency, particularly in high-frequency operation, where AC losses are higher at low operating temperatures.
The last part of the thesis focuses on comparing the main electromagnetic performance metrics for a conventional hairpin winding, wound onto a benchmark stator with a semi-closed slot opening design, and a continuous hairpin winding, in which the slot opening is open. Lastly, the adoption of semi-magnetic slot wedges is investigated to improve the overall performance of the motor.
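The root of the high-frequency AC loss problem is the skin depth, delta = sqrt(2*rho/(omega*mu)): when the conductor height of a hairpin bar exceeds delta, current crowds toward the surface and the AC-to-DC resistance ratio grows. A quick worked example for copper (material constants are standard values, not thesis data):

```python
import math

# Skin depth of copper, delta = sqrt(2*rho / (omega*mu)).  Hairpin bars are
# several millimeters tall, so at inverter-fed frequencies the bar height
# can exceed delta and AC losses rise sharply.
rho = 1.68e-8          # copper resistivity [ohm m]
mu = 4e-7 * math.pi    # permeability of copper, ~mu0 [H/m]

def skin_depth(f_hz):
    omega = 2 * math.pi * f_hz
    return math.sqrt(2 * rho / (omega * mu))

print(skin_depth(50), skin_depth(1000))   # ~9.2 mm at 50 Hz, ~2.1 mm at 1 kHz
```

So a 4 mm hairpin conductor is electrically "thin" at 50 Hz but notably thicker than the skin depth at 1 kHz, which is why the thesis's optimization trades off slot geometry, temperature, and frequency.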
Abstract:
Riding the wave of recent groundbreaking achievements, artificial intelligence (AI) is currently the buzzword on everybody's lips, and Machine Learning (ML), which allows algorithms to learn from historical data, has emerged as its pinnacle. The multitude of algorithms, each with unique strengths and weaknesses, highlights the absence of a universal solution and poses a challenging optimization problem. In response, automated machine learning (AutoML) navigates vast search spaces under tight time constraints. By lowering entry barriers, AutoML promises the democratization of AI, yet it faces several challenges. In data-centric AI, the discipline of systematically engineering the data used to build an AI system, we address the challenge of configuring data pipelines. We devise a methodology for building effective data pre-processing pipelines in supervised learning, as well as a data-centric AutoML solution for unsupervised learning. In human-centric AI, many current AutoML tools were built not around the user but around algorithmic ideas, raising ethical and social-bias concerns. We contribute by deploying AutoML tools that aim at complementing, instead of replacing, human intelligence. In particular, we provide solutions for single-objective and multi-objective optimization and showcase the challenges and potential of novel interfaces featuring large language models. Finally, there are application areas that rely on numerical simulators, often related to earth observation; these tend to be particularly high-impact, addressing challenges such as climate change and crop life cycles. We commit to coupling these physical simulators with (Auto)ML solutions towards a physics-aware AI. Specifically, in precision farming, we design a smart irrigation platform that allows real-time monitoring of soil moisture, predicts future moisture values, and estimates water demand to schedule irrigation.
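The search problem AutoML solves can be reduced to a toy: pick the best algorithm and hyperparameter from a configuration space under an evaluation budget. The configuration space, the synthetic `score` stand-in for cross-validated accuracy, and random search itself are illustrative assumptions; real AutoML systems use far richer spaces and smarter search:

```python
import random

# Toy AutoML sketch: random search over a tiny (algorithm, hyperparameter)
# configuration space.  "score" is a synthetic stand-in for a validation
# metric; every name and number here is illustrative.
random.seed(0)
space = {"knn": range(1, 30), "tree": range(1, 20)}   # algo -> hyperparam range

def score(algo, p):                       # synthetic score with one optimum each
    best = {"knn": 7, "tree": 5}[algo]
    return 1.0 - abs(p - best) / 30

best_cfg, best_score = None, -1.0
for _ in range(50):                       # evaluation budget
    algo = random.choice(list(space))
    p = random.choice(list(space[algo]))
    s = score(algo, p)
    if s > best_score:
        best_cfg, best_score = (algo, p), s

print(best_cfg, round(best_score, 3))
```

Replacing the random sampler with Bayesian optimization or evolutionary search, and the toy score with cross-validation, recovers the shape of a real AutoML loop.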
Abstract:
The objective of this study was to estimate (co)variance functions using random regression models on Legendre polynomials for the analysis of repeated measures of BW from birth to adult age. A total of 82,064 records from 8,145 females were analyzed. Different models were compared. The models included additive direct and maternal effects, and animal and maternal permanent environmental effects, as random terms. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and orthogonal Legendre polynomials of animal age (cubic regression) were considered as random covariables. Eight models with polynomials of third to sixth order were used to describe additive direct and maternal effects, and animal and maternal permanent environmental effects. Residual effects were modeled using 1 (i.e., assuming homogeneity of variances across all ages) or 5 age classes. The model with 5 classes was the best at describing the trajectory of residuals along the growth curve. The model including fourth- and sixth-order polynomials for additive direct and animal permanent environmental effects, respectively, and third-order polynomials for maternal genetic and maternal permanent environmental effects was the best. Estimates of (co)variance obtained with the multi-trait and random regression models were similar. Direct heritability estimates obtained with the random regression models followed a trend similar to that obtained with the multi-trait model. The largest estimates of maternal heritability were those of BW taken close to 240 d of age. In general, estimates of correlation between BW from birth to 8 yr of age decreased with increasing distance between ages.
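The Legendre covariable construction used in such random regression models can be sketched briefly: ages are standardized to [-1, 1] and the first few Legendre polynomials evaluated at each standardized age form the regression covariables. The ages and (cubic) order below are illustrative, and the sketch uses unnormalized Legendre polynomials:

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch of Legendre covariables for random regression on age: map ages to
# [-1, 1], then evaluate P_0..P_3 at each standardized age.  Ages are
# illustrative; genetic-evaluation software often also normalizes the
# polynomials, which this sketch omits.
ages = np.array([1.0, 240.0, 550.0, 2920.0])                 # days, birth to adult
t = 2 * (ages - ages.min()) / (ages.max() - ages.min()) - 1  # standardize to [-1, 1]

order = 3                                                    # cubic regression
# Column j holds P_j(t); a one-hot coefficient vector selects each polynomial.
Phi = np.column_stack([legendre.legval(t, np.eye(order + 1)[j])
                       for j in range(order + 1)])

print(Phi[:, 0])   # P_0 is identically 1 (the intercept covariable)
print(Phi[0])      # at t = -1 (birth): P_j(-1) = (-1)^j
```

Each animal's random regression coefficients multiply these columns, so the (co)variance function at any pair of ages is a quadratic form in the coefficient covariance matrix.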
Abstract:
A Wiener system is a linear time-invariant filter, followed by an invertible nonlinear distortion. Assuming that the input signal is an independent and identically distributed (iid) sequence, we propose an algorithm for estimating the input signal only by observing the output of the Wiener system. The algorithm is based on minimizing the mutual information of the output samples, by means of a steepest descent gradient approach.
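The structure being inverted can be made concrete with a forward-model sketch. The filter taps, the tanh nonlinearity, and the uniform iid input below are illustrative assumptions; the abstract's algorithm would observe only y and estimate s by minimizing the mutual information of the output samples, which this sketch does not implement:

```python
import numpy as np

# Forward model of a Wiener system: an iid input s passes through an LTI
# filter h and then an invertible memoryless nonlinearity (here tanh).
# All parameters are illustrative.
rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, 1000)                 # iid input sequence
h = np.array([1.0, 0.5, 0.25])               # LTI filter impulse response
x = np.convolve(s, h, mode="full")[:len(s)]  # linear stage
y = np.tanh(x)                               # invertible nonlinear distortion

# The inverse system first undoes the nonlinearity, then would deconvolve x.
x_rec = np.arctanh(np.clip(y, -0.999999, 0.999999))
print(np.max(np.abs(x_rec - x)))             # nonlinearity inverted to precision
```

In the blind setting neither h nor the nonlinearity is known, so both stages of the inverse are parameterized and adapted jointly by the steepest-descent mutual-information criterion.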
Abstract:
In the network era, creative achievements like innovations are more and more often created in interaction among different actors. The complexity of today's problems transcends the individual human mind, requiring not only individual but also collective creativity. In collective creativity, it is impossible to trace the source of new ideas to an individual. Instead, creative activity emerges from the collaboration and contribution of many individuals, thereby blurring the contribution of specific individuals in creating ideas. Collective creativity is often associated with diversity of knowledge, skills, experiences and perspectives. Collaboration between diverse actors thus triggers creativity and gives possibilities for collective creativity. This dissertation investigates collective creativity in the context of practice-based innovation. Practice-based innovation processes are triggered by problem setting in a practical context and conducted in non-linear processes utilising scientific and practical knowledge production and creation in cross-disciplinary innovation networks. In these networks diversity or distances between innovation actors are essential. Innovation potential may be found in exploiting different kinds of distances. This dissertation presents different kinds of distances, such as cognitive, functional and organisational, which could be considered as sources of creativity and thus innovation. However, formation and functioning of these kinds of innovation networks can be problematic. Distances between innovating actors may be so great that a special interpretation function is needed – that is, brokerage. This dissertation defines factors that enhance collective creativity in practice-based innovation and especially in the fuzzy front end phase of innovation processes.
The first objective of this dissertation is to study individual and collective creativity at the employee level and identify those factors that support individual and collective creativity in the organisation. The second objective is to study how organisations use external knowledge to support collective creativity in their innovation processes in open multi-actor innovation. The third objective is to define how brokerage functions create possibilities for collective creativity, especially in the context of practice-based innovation. The research objectives have been studied through five substudies using a case-study strategy. Each substudy highlights various aspects of creativity and collective creativity. The empirical data consist of materials from innovation projects arranged in the Lahti region, Finland, or materials from the development of innovation methods in the Lahti region. The Lahti region has been chosen as the research context because the innovation policy of the region especially emphasises the promotion of practice-based innovations. The results of this dissertation indicate that not all possibilities of collective creativity are utilised in the internal operations of organisations. The dissertation introduces several factors that could support collective creativity in organisations. However, creativity as a social construct is understood and experienced differently in different organisations, and these differences should be taken into account when supporting creativity in organisations. The increasing complexity of most potential innovations requires collaborative creative efforts that often exceed the boundaries of the organisation and call for the involvement of external expertise. In practice-based innovation, different distances are considered as sources of creativity. This dissertation gives practical implications for how to knowingly exploit different kinds of distances.
It especially underlines the importance of brokerage functions in open, practice-based innovation in creating possibilities for collective creativity. As a contribution of this dissertation, a model of brokerage functions in practice-based innovation is formulated. According to the model, the results and success of brokerage functions are based on the context of brokerage as well as the roles, tasks, skills and capabilities of brokers. Brokerage functions in practice-based innovation can also be divided into social and cognitive brokerage.
Characterizing Dynamic Optimization Benchmarks for the Comparison of Multi-Modal Tracking Algorithms
Abstract:
Population-based metaheuristics, such as particle swarm optimization (PSO), have been employed to solve many real-world optimization problems. Although it is often sufficient to find a single solution to these problems, there are cases where identifying multiple, diverse solutions can be beneficial or even required. Some of these problems are further complicated by a change in their objective function over time. This type of optimization is referred to as dynamic, multi-modal optimization. Algorithms that locate multiple optima in a search space are identified as niching algorithms. Although numerous dynamic niching algorithms have been developed, their performance is often measured solely on their ability to find a single, global optimum. Furthermore, the comparisons often use synthetic benchmarks whose landscape characteristics are generally limited and unknown. This thesis provides a landscape analysis of the dynamic benchmark functions commonly developed for multi-modal optimization. The benchmark analysis results reveal that the mechanisms responsible for dynamism in the current dynamic benchmarks do not significantly affect landscape features, thus suggesting a lack of representation of problems whose landscape features vary over time. This analysis is used in a comparison of current niching algorithms to identify the effects that specific landscape features have on niching performance. Two performance metrics are proposed to measure both the scalability and accuracy of the niching algorithms. The algorithm comparison results demonstrate which algorithms are best suited to a variety of dynamic environments. The comparison also examines each of the algorithms in terms of its niching behaviours and analyzes the range of, and trade-off between, scalability and accuracy when tuning the algorithms' respective parameters.
These results contribute to the understanding of current niching techniques as well as the problem features that ultimately dictate their success.
Abstract:
Examination of the retina by non-invasive, in vivo means has been a research objective for many years. For the eye, as for all organs of the human body, a sustained supply of oxygen is necessary to maintain homeostasis. The oxygen concentration of the blood in the retinal vessels can be determined mainly from measurements of the reflectance spectrum of the ocular fundus. By shining light at different wavelengths onto the retina and analyzing the nature of the light it reflects, quantitative information can be obtained on the oxygen level in the retinal blood vessels or on blood flow. However, modeling is complicated by the various interactions and paths the light takes through the ocular tissues before leaving the eye. The objective of this thesis was to develop and validate a mathematical model for computing hemoglobin derivatives from spectral reflectometry measurements on the retinal blood vessels. The instrument used to measure the spectral reflectometry function was a multi-channel spectroreflectometer, a technology capable of measuring 800 spectra simultaneously, in vivo and continuously. The mathematical equation describing the spectral reflectometry function in the 480 nm to 650 nm range was expressed as a linear combination of several terms representing the spectral signatures of hemoglobin (SHb) and oxyhemoglobin (SOHb), the absorption and scattering of the ocular media, and a family of multi-Gaussian functions used to compensate for the mismatch between the model and the experimental data in the red region of the spectrum. 
The model results reveal that the spectral signal obtained from reflectometry measurements in the eye is complex, containing absorbed, reflected, and scattered light, each with a specific predominance depending on the spectral region. The spectral absorption function of blood dominates the 520 to 580 nm region, whereas at wavelengths longer than 590 nm, scattering by red blood cells dominates. The model was used to measure the oxygen concentration in the capillaries of the optic nerve head following dynamic physical exercise. The exercise led to a reduction in capillary oxygen concentration, as well as a reduction in intraocular pressure, while blood oxygen saturation, measured at the finger, remained constant. The mathematical model developed in this project, together with the novel multi-channel spectroreflectometry technique, thus made it possible to determine the blood oxygenation of the retinal vessels in vivo and non-invasively.
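The linear-combination structure of the spectral model lends itself to a least-squares sketch. The two "chromophore" signatures below are synthetic Gaussian placeholders, not real hemoglobin spectra, and the constant baseline stands in for the ocular-media and multi-Gaussian correction terms:

```python
import numpy as np

# Sketch of linear spectral unmixing: a measured spectrum is expressed as a
# linear combination of component signatures, and the weights are recovered
# by least squares.  All spectra here are synthetic placeholders.
wl = np.linspace(480, 650, 200)                      # wavelengths [nm]
S_Hb = np.exp(-((wl - 555) / 30) ** 2)               # stand-in for Hb signature
S_OHb = np.exp(-((wl - 540) / 15) ** 2) + np.exp(-((wl - 576) / 15) ** 2)
A = np.column_stack([S_Hb, S_OHb, np.ones_like(wl)]) # design matrix + baseline

true_w = np.array([0.3, 0.7, 0.1])                   # ground-truth mixture
measured = A @ true_w                                # noiseless "measurement"
w_hat, *_ = np.linalg.lstsq(A, measured, rcond=None)
print(w_hat)                                         # weights recovered exactly
```

The ratio of the oxyhemoglobin weight to the total hemoglobin weight then plays the role of an oxygen saturation estimate; the thesis's model adds the scattering and red-region correction terms that this toy omits.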
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The objective of this study was to estimate (co)variance components using random regression on B-spline functions for weight records obtained from birth to adulthood. A total of 82,064 weight records from 8,145 females, obtained from the data bank of the Nellore Breeding Program (PMGRN/Nellore Brazil), which started in 1987, were used. The models included direct additive and maternal genetic effects and animal and maternal permanent environmental effects as random effects. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and orthogonal Legendre polynomials of age (cubic regression) were considered as random covariates. The random effects were modeled using B-spline functions considering linear, quadratic and cubic polynomials for each individual segment. Residual variances were grouped into five age classes. Direct additive genetic and animal permanent environmental effects were modeled using up to seven knots (six segments). A single segment with two knots at the end points of the curve was used for the estimation of maternal genetic and maternal permanent environmental effects. A total of 15 models were studied, with the number of parameters ranging from 17 to 81. The models that used B-splines were compared with multi-trait analyses with nine weight traits and with a random regression model that used orthogonal Legendre polynomials. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic and animal permanent environmental effects and two knots for the maternal additive genetic and maternal permanent environmental effects, was the most appropriate and parsimonious model to describe the covariance structure of the data. Selection for higher weight, such as at young ages, should be performed taking into account an increase in mature cow weight. This is particularly important in most Nellore beef cattle production systems, where the cow herd is maintained on range conditions.
There is limited scope for modifying the growth curve of Nellore cattle so as to select for rapid growth at young ages while keeping adult weight constant.
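The B-spline covariable construction behind the selected model can be sketched with the Cox-de Boor recursion: a clamped quadratic basis over four distinct knots, i.e. three segments. The mapping of ages to [0, 1] and the uniform knot placement are illustrative assumptions:

```python
import numpy as np

# Quadratic B-spline basis via the Cox-de Boor recursion, on a clamped knot
# vector with four distinct knots (three segments), as in the selected model.
# Knot placement and the [0, 1] age scale are illustrative.
def bspline_basis(x, t, p):
    """Evaluate all degree-p B-spline basis functions on knot vector t at x."""
    x = np.atleast_1d(x).astype(float)
    t = np.asarray(t, dtype=float)
    # degree 0: span indicators; the last nonempty span is closed on the
    # right so x == t[-1] still belongs to one basis function
    B = np.zeros((len(t) - 1, len(x)))
    for i in range(len(t) - 1):
        inside = (t[i] <= x) & (x < t[i + 1])
        if t[i] < t[i + 1] == t[-1]:
            inside |= x == t[-1]
        B[i] = inside
    for d in range(1, p + 1):                     # Cox-de Boor recursion
        Bn = np.zeros((len(t) - 1 - d, len(x)))
        for i in range(len(t) - 1 - d):
            left = t[i + d] - t[i]
            right = t[i + d + 1] - t[i + 1]
            if left > 0:
                Bn[i] += (x - t[i]) / left * B[i]
            if right > 0:
                Bn[i] += (t[i + d + 1] - x) / right * B[i + 1]
        B = Bn
    return B.T                                    # shape (len(x), n_basis)

knots = [0, 0, 0, 1/3, 2/3, 1, 1, 1]              # clamped, three segments
Phi = bspline_basis(np.linspace(0, 1, 7), knots, 2)
print(Phi.shape)                                  # (7, 5): five covariables
print(Phi.sum(axis=1))                            # partition of unity: all ones
```

Compared with a single high-order Legendre polynomial, these locally supported basis functions let each segment of the growth curve be fitted with fewer parameters, which is why the B-spline model was the more parsimonious choice.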
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)