928 results for cost-aware process design


Relevance:

40.00%

Publisher:

Abstract:

A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, they normally build a schedule systematically, following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. They can identify good parts and are aware of the solution quality even if the scheduling process is not yet completed, and thus have the ability to finish a schedule by using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work that used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, new rule strings have been obtained. Sets of rule strings are generated in this way, some of which will replace previous strings based on fitness. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation, 2: allocate nurse to low-cost shifts).
At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, we observe two solution pathways: because pure low-cost or pure random allocation produces low-quality solutions, either rule 1 is used for the first 2-3 nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after using rule 1 two or three times', or vice versa. It should be noted that for our and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning amounts to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover, and Balance of Cost and Cover. In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are: 1. Set t = 0, and generate an initial population P(0) at random; 2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t); 3. Compute the conditional probabilities of each node according to this set of promising solutions; 4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities; a set of new rule strings O(t) is generated in this way; 5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t+1; 6. If the termination conditions are not met (we use 2000 generations), go to step 2. Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see if there is a good constructing sequence for individual data instances, given a fixed nurse scheduling order.
If so, the good patterns could be recognized and then extracted as new domain knowledge. Thus, by using this extracted knowledge, we could assign specific rules to the corresponding nurses beforehand and schedule only the remaining nurses with all available rules, making it possible to reduce the solution space. Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01. References: [1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126,
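The counting-based learning loop above can be sketched in a few lines. The sketch below is a deliberately simplified univariate version: each nurse's rule is treated independently, so no network edges are learned, and a made-up toy fitness function stands in for the real schedule evaluation. Population sizes, the four-rule set and the reward structure are illustrative only, not the paper's actual settings.

```python
import random

N_NURSES, N_RULES = 5, 4   # rules: 0 Random, 1 Cheapest Cost, 2 Best Cover, 3 Balance
POP, PROMISING, GENS = 50, 20, 100

def fitness(rule_string):
    # Toy stand-in for schedule quality: rewards rule 1 for the first two
    # nurses and rule 2 for the rest, mimicking the pathway described above.
    return sum(1 for i, r in enumerate(rule_string) if r == (1 if i < 2 else 2))

def learn_probs(promising):
    # "Learning amounts to counting": per-nurse multinomial frequencies.
    probs = []
    for i in range(N_NURSES):
        counts = [1] * N_RULES               # Laplace smoothing
        for s in promising:
            counts[s[i]] += 1
        total = sum(counts)
        probs.append([c / total for c in counts])
    return probs

def sample(probs):
    # Roulette-wheel selection of a rule for each nurse.
    return [random.choices(range(N_RULES), weights=p)[0] for p in probs]

random.seed(0)
pop = [[random.randrange(N_RULES) for _ in range(N_NURSES)] for _ in range(POP)]
for _ in range(GENS):
    promising = sorted(pop, key=fitness, reverse=True)[:PROMISING]
    probs = learn_probs(promising)
    pop = promising + [sample(probs) for _ in range(POP - PROMISING)]

best = max(pop, key=fitness)
print(best, fitness(best))
```

On this toy problem the loop converges to strings that switch from rule 1 to rule 2 at the right position, mirroring the two-pathway behaviour described in the example.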

Relevance:

40.00%

Publisher:

Abstract:

The following thesis documents the primary artistic concept, design process and execution of Marchlena Rodgers' costume design for the University of Maryland's production of Intimate Apparel. Intimate Apparel opened October 9, 2015 in the University of Maryland's Kay Theatre. The piece was written by Lynn Nottage and directed by Jennifer Nelson. The set was designed by Lydia Francis and the lighting by Max Doolittle.

Relevance:

40.00%

Publisher:

Abstract:

The performance, energy-efficiency and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include the fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed and power. Continued Moore's Law scaling will not come from technology scaling alone; it must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by translating the routing problem into a third dimension, and facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory wall and the communication wall. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment needed to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies.
Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth which integrates the computational, electrical, physical, thermal and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impacts on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the Co-Design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters and the multitude of metrics of interest to the designer (i.e. power, performance, temperature and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for future improvement of this work.
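As a toy illustration of why a multi-domain view matters, the sketch below sweeps a small architectural design space (core count, voltage, stacked layers, cooling choice) and rejects configurations that violate a thermal budget. Every model and number here is invented for illustration; none is taken from the dissertation.

```python
import itertools

# Toy co-design sweep: choose core count, voltage and stacking layers jointly,
# rejecting design points that violate a thermal budget. Illustrative models.
def performance(cores, volt, layers):
    return cores * volt * (1 + 0.3 * (layers - 1))       # more layers -> more bandwidth

def power(cores, volt, layers):
    return cores * volt ** 2 * layers

def temperature(cores, volt, layers, mf_cooling):
    # Stacking raises thermal resistance in air-cooled packages; MF cooling lowers it.
    r_th = 0.4 / layers if mf_cooling else 0.8 * layers
    return 25 + r_th * power(cores, volt, layers)

T_MAX = 85.0
best = None
for cores, volt, layers, mf in itertools.product(
        [2, 4, 8, 16], [0.7, 0.9, 1.1], [1, 2, 4], [False, True]):
    if temperature(cores, volt, layers, mf) > T_MAX:
        continue                       # infeasible: outside the thermal envelope
    perf = performance(cores, volt, layers)
    if best is None or perf > best[0]:
        best = (perf, cores, volt, layers, mf)

print(best)
```

In this toy model the highest-performance feasible point is only reachable with MF cooling enabled; a single-domain sweep that ignored temperature would pick an infeasible stacked configuration, which is exactly the cross-domain interaction Co-Design is meant to expose.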

Relevance:

40.00%

Publisher:

Abstract:

Strawberries harvested for processing as frozen fruit are currently de-calyxed manually in the field. This process requires the removal of the stem cap with green leaves (i.e. the calyx) and incurs many disadvantages when performed by hand. Not only does it require maintaining cutting-tool sanitation, it also increases labor time and the exposure of the de-capped strawberries before in-plant processing. This leads to labor inefficiency and decreased harvest yield. By moving the calyx-removal process from the fields to the processing plants, this new practice would reduce field labor and improve management and logistics, while increasing annual yield. As labor prices continue to increase, the strawberry industry has shown great interest in the development and implementation of an automated calyx-removal system. In response, this dissertation describes the design, operation, and performance of a full-scale automatic vision-guided intelligent de-calyxing (AVID) prototype machine. The AVID machine utilizes commercially available equipment to produce a relatively low-cost automated de-calyxing system that can be retrofitted into existing food-processing facilities. This dissertation is organized into five sections. The first two sections comprise a machine overview and a 12-week processing-plant pilot study. Results of the pilot study indicate the AVID machine is able to de-calyx grade-1-with-cap conical strawberries at roughly 66 percent output weight yield at a throughput of 10,000 pounds per hour. The remaining three sections describe in detail the three main components of the machine: a strawberry loading and orientation conveyor, a machine vision system for calyx identification, and a synchronized multi-waterjet knife calyx-removal system. In short, the loading system utilizes rotational energy to orient conical strawberries. The machine vision system determines cut locations through real-time RGB feature extraction.
The high-speed multi-waterjet knife system uses direct drive actuation to locate 30,000 psi cutting streams to precise coordinates for calyx removal. Based on the observations and studies performed within this dissertation, the AVID machine is seen to be a viable option for automated high-throughput strawberry calyx removal. A summary of future tasks and further improvements is discussed at the end.
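As a hypothetical illustration of RGB feature extraction for calyx identification (the machine's actual features and thresholds are not given here), one could locate the green calyx region by simple channel thresholding and take its centroid as a cut-location candidate:

```python
import numpy as np

# Hypothetical cut-point localisation: find the centroid of "green" (calyx)
# pixels in an RGB frame by channel thresholding. The margin value and the
# synthetic frame below are illustrative, not the AVID system's parameters.
def calyx_centroid(rgb, margin=30):
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    mask = (g > r + margin) & (g > b + margin)   # green dominates both channels
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                              # no calyx found in frame
    return float(ys.mean()), float(xs.mean())

# Synthetic 100x100 frame: red berry with a green patch near the top.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[..., 0] = 200                              # red everywhere
frame[10:20, 40:60, 1] = 255                     # green calyx patch
frame[10:20, 40:60, 0] = 0
cy, cx = calyx_centroid(frame)
print(cy, cx)
```

In a real system the centroid would be refined (e.g. by shape features and berry orientation) before being synchronized with the waterjet knives.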

Relevance:

40.00%

Publisher:

Abstract:

No other technology has affected daily life to such a degree, or seen such speedy adoption, as the mobile phone. At the same time, mobile media has developed into a serious marketing tool for all kinds of businesses, and the industry has grown explosively in recent years. The objective of this thesis is to examine the mobile marketing process of an international event. This thesis is a qualitative case study. The chosen case is the mobile marketing process of the Falun2015 FIS Nordic World Ski Championships, due to the researcher's interest in the topic and contacts with people around the event. The empirical findings were acquired by conducting two interviews with three experts from the case organisation and its partner organisation. The interviews were performed as semi-structured interviews utilising themes arising from the chosen theoretical framework. The framework distinguishes six phases in the process: (i) campaign initiation, (ii) campaign design, (iii) campaign creation, (iv) permission management, (v) delivery, and (vi) evaluation and analysis. Phases one and five were not examined in this thesis, because campaign initiation was not seen as purely part of the campaign implementation, and investigating phase five would have required a very technical viewpoint. In addition to the interviews, some pre-established documents were used as supporting data. The empirical findings of this thesis mainly follow the theoretical framework utilised. However, some modifications to the model could be made, mainly related to the order of the different phases. In the revised model, the actions are categorised depending on when they should be conducted, i.e. before, during or after the event. Regardless of the categorisation, the phases can occur in a different order and overlap. In addition, the business network was highly emphasised by the empirical findings and is thus added to the modified model.
Five managerial recommendations can be drawn from the empirical findings of this thesis: (i) the importance of a business network should be highly valued in a mobile marketing process; (ii) clear goals should be defined for mobile marketing actions in order to make sure that everyone involved is aware of them; (iii) interactivity should be perceived as part of mobile marketing communication; (iv) enough time should be allowed for the development of a mobile marketing process in order to exploit all the potential it can offer; and (v) attention should be paid to measuring and analysing matters that are of relevance.

Relevance:

40.00%

Publisher:

Abstract:

Modern electric machine drives, particularly three-phase permanent magnet machine drive systems, represent an indispensable part of high-power-density products. Such products include hybrid electric vehicles, large propulsion systems, and automation products. The reliability and cost of these products are directly related to the reliability and cost of these systems. The compatibility of the electric machine and its drive system for optimal cost and operation has been a large challenge in industrial applications. The main objective of this dissertation is to find a design and control scheme for the best compromise between the reliability and optimality of the electric machine-drive system. The effort presented here is motivated by the need for new techniques to connect the design and control of electric machines and drive systems. A highly accurate and computationally efficient modeling process was developed to monitor the magnetic, thermal, and electrical aspects of the electric machine in its operational environments. The modeling process was also utilized in the design process, in the form of a finite-element-based optimization process, including a hardware-in-the-loop variant. It was later employed in the design of accurate and highly efficient physics-based customized observers that are required for fault diagnosis as well as sensorless rotor position estimation. Two test setups with different ratings and topologies were numerically and experimentally tested to verify the effectiveness of the proposed techniques. The modeling process was also employed in the real-time demagnetization control of the machine, and various real-time scenarios were successfully verified. It was shown that this process offers the potential to optimally redefine the assumptions made in sizing the permanent magnets of the machine and the DC bus voltage of the drive for the worst operating conditions.
The mathematical development and stability criteria of the physics-based modeling of the machine, the design optimization, and the physics-based fault diagnosis and sensorless techniques are described in detail. To investigate the performance of the developed design test-bed, software and hardware setups were constructed first, and several topologies of the permanent magnet machine were optimized inside the optimization test-bed. To investigate the performance of the developed sensorless control, a test-bed including a 0.25 kW surface-mounted permanent magnet synchronous machine was created. The verification of the proposed technique over a range from medium to very low speed effectively shows the intelligent design capability of the proposed system. Additionally, to investigate the performance of the developed fault diagnosis system, a test-bed including a 0.8 kW surface-mounted permanent magnet synchronous machine with trapezoidal back electromotive force was created. The results verify the proposed technique under dynamic eccentricity, DC bus voltage variations, and harmonic loading conditions, making the system an ideal candidate for propulsion systems.
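As a toy illustration of the idea behind sensorless rotor-position estimation (not the dissertation's physics-based observer), note that the back-EMF of a surface-mounted PM machine in the stationary alpha-beta frame encodes the electrical angle, which can be recovered with atan2. The flux-linkage and speed constants below are assumed for the example.

```python
import math

# Toy sensorless position estimate: at steady state the back-EMF in the
# alpha-beta frame is proportional to (-sin(theta), cos(theta)), so the
# electrical angle is recovered by inverting that model with atan2.
LAMBDA_M = 0.05               # PM flux linkage [Wb] (assumed)
OMEGA_E = 2 * math.pi * 50    # electrical speed [rad/s] (assumed)

def back_emf(theta):
    e_alpha = -OMEGA_E * LAMBDA_M * math.sin(theta)
    e_beta = OMEGA_E * LAMBDA_M * math.cos(theta)
    return e_alpha, e_beta

def estimate_theta(e_alpha, e_beta):
    # atan2(sin, cos) recovers theta on (-pi, pi].
    return math.atan2(-e_alpha, e_beta)

for true_theta in [0.1, 1.0, 2.5]:
    print(true_theta, estimate_theta(*back_emf(true_theta)))
```

A practical observer must also handle noise, parameter errors and the vanishing back-EMF at very low speed, which is precisely why the dissertation's physics-based approach is far more elaborate than this sketch.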

Relevance:

40.00%

Publisher:

Abstract:

Research in human-computer interaction (HCI) covers both technological and human behavioural concerns. As a consequence, the contributions made in HCI research tend to be oriented toward either engineering or the social sciences. In HCI, the purpose of practical research contributions is to reveal unknown insights about human behaviour and its relationship to technology. Practical research methods normally used in HCI include formal experiments, field experiments, field studies, interviews, focus groups, surveys, usability tests, case studies, diary studies, ethnography, contextual inquiry, experience sampling, and automated data collection. In this paper, we report on our experience using focus groups, surveys and interviews as evaluation methods, and how we adapted these methods to develop artefacts: either interface designs or information and technological systems. Four projects exemplify the application of the different methods to gather information about users' wants, habits, practices, concerns and preferences. The goal was to build an understanding of the attitudes and satisfaction of the people who might interact with a technological artefact or information system. At the same time, we intended to design information systems and technological applications that promote resilience in organisations (a set of routines that allows recovery from obstacles) and good user experiences. Organisations can here also be viewed within a systems approach, which means that system perturbations, even failures, can be characterised and improved upon. The term resilience has been applied to everything from real estate to the economy, sports, events, business, psychology, and more.
In this study, we highlight that resilience is also made up of a number of different skills and abilities (self-awareness, creating meaning from other experiences, self-efficacy, optimism, and building strong relationships) that are foundational ingredients people should draw on in the process of enhancing an organisation's resilience. Resilience enhances knowledge of the resources available to people confronting existing problems.

Relevance:

40.00%

Publisher:

Abstract:

The central objective of this case study was to formulate the internationalization strategy of Tubofuro®, identifying the relevant points from its design to its implementation. Tubofuro® is located in Leiria, Ortigosa parish, and operates, among other areas, in the Portuguese PVC pipe industry, for which the domestic market is currently clearly insufficient, given that supply exceeds demand. Tubofuro® has been an exporter since 2004, and the work developed here specifically aims to increase foreign sales to 45% of the company's total business by 2018, and to increase the number of markets through new partners, positioning Tubofuro® among the main players in each market, particularly in South American, North African and European markets. To achieve these objectives, a case study was conducted, centred on Tubofuro®, the target of the internationalization strategy. The research supporting the strategy formulation was based on a thorough analysis of the company's external environment and internal characteristics, crossing different types of data: quantitative, qualitative, secondary and primary. This work resulted in an internationalization and international marketing plan for the next three years, whose objectives are based on entry into and subsequent growth in new markets, including the Chilean, Peruvian, Mexican, Argentine, Algerian and German markets, as well as growth in presence and turnover in the markets to which Tubofuro® already exports regularly, for example Spain, France, Tunisia and Morocco.
Based on the production capacity of Tubofuro®, which will receive investment only for updating rather than for expansion, the company is expected to be able to serve 8 regular markets, with occasional sporadic exports to other markets that do not interfere with its normal production capacity. The suggested markets resulted from a study of the final price at which local customers purchase a product equal or similar to Tubofuro®'s, and of the number of potential customers in each market. The internationalization model known as the Uppsala model corresponds to the strategy adopted by the company for its internationalization process, reflecting senior management's philosophy and risk aversion. For each market, the Tubofuro® sales team exports a full container and registers customer feedback, including quality and sales flow in the market, in order to seek a partnership agreement with a local distributor, which allows Tubofuro® to move to step two of the above-mentioned model. The partnership agreement is based on a mutual commitment to technical and commercial cooperation between Tubofuro® and the partner, in order to increase performance among local customers. Only if a market presents demand greater than the supply capacity, and it is justified by the cost/benefit ratio, will entry into that market through a joint venture or subsidiary be considered. Although this is a case study, meaning the findings are adjusted to the concrete case of Tubofuro® and cannot be generalized, we believe this work can be a useful example for other companies in their internationalization processes, whether for the methodology adopted in formulating the strategy or for the outputs and conclusions drawn.

Relevance:

40.00%

Publisher:

Abstract:

Shearing is the process where sheet metal is mechanically cut between two tools. Various shearing technologies are commonly used in the sheet metal industry, for example in cut-to-length lines, slitting lines, and end cropping. Shearing has speed and cost advantages over competing cutting methods like laser and plasma cutting, but involves large forces on the equipment and large strains in the sheet material. The constant development of sheet metals toward higher strength and formability leads to increased forces on the shearing equipment and tools. Shearing of new sheet materials implies finding new suitable shearing parameters. Investigating shearing parameters through live tests in production is expensive, and separate experiments are time consuming and require specialized equipment. Studies involving a large number of parameters and coupled effects are therefore preferably performed with finite element based simulations; accurate experimental data remains a prerequisite to validate such simulations, but there is a shortage of it. In industrial shearing processes, measured forces are always larger than the actual forces acting on the sheet, due to friction losses. Shearing also generates a force that attempts to separate the two tools, changing the shearing conditions through increased clearance between the tools. Tool clearance is also the most common shearing parameter to adjust, depending on material grade and sheet thickness, to moderate the required force and to control the final sheared-edge geometry. In this work, an experimental procedure that provides a stable tool clearance together with accurate measurements of tool forces and tool displacements was designed, built and evaluated. Important shearing parameters and demands on the experimental set-up were identified in a sensitivity analysis performed with finite element simulations under the assumption of plane strain.
With respect to tool clearance stability and accurate force measurements, a symmetric experiment with two simultaneous shears and internal balancing of the forces attempting to separate the tools was constructed. Steel sheets of different strength levels were sheared using this set-up, with various tool clearances, sheet clamping conditions and rake angles. Results showed that tool penetration before fracture decreased with increased material strength. When one side of the sheet was left unclamped and free to move, the required shearing force decreased, but the force attempting to separate the two tools increased. Further, the maximum shearing force decreased and the rollover increased with increased tool clearance. Digital image correlation was applied to measure strains on the sheet surface. The obtained strain fields, together with a material model, were used to compute the stress state in the sheet. A comparison, up to crack initiation, of these experimental results with corresponding results from finite element simulations in three dimensions and in a plane strain approximation showed that effective strains on the surface are representative also of the bulk material. A simple model was successfully applied to calculate the tool forces in shearing with angled tools from forces measured with parallel tools. These results suggest that, with respect to tool forces, a plane strain approximation is valid also for angled tools, at least for small rake angles. In general terms, this study provides a stable symmetric experimental set-up with internal balancing of lateral forces, for accurate measurements of tool forces, tool displacements, and sheet deformations, to study the effects of important shearing parameters. The results give further insight into the strain and stress conditions at crack initiation during shearing, and can also be used to validate models of the shearing process.
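The thesis does not spell out its simple angled-tool model here, but one plausible assumption-level formulation is that, for a small rake angle, different points along the sheet width are at different local penetrations, so the steady-state angled-tool force equals the area under the parallel-tool force-penetration curve (per unit sheet width) divided by the tangent of the rake angle. A sketch under that assumption, with an invented force curve:

```python
import math

# Assumed simple model: F_angled = (1/tan(alpha)) * integral of f(u) du,
# where f(u) is the parallel-tool force per unit width at penetration u.
def angled_tool_force(penetrations_mm, force_per_mm, alpha_deg):
    # Trapezoidal integral of the force-penetration curve.
    area = 0.0
    for i in range(1, len(penetrations_mm)):
        du = penetrations_mm[i] - penetrations_mm[i - 1]
        area += 0.5 * (force_per_mm[i] + force_per_mm[i - 1]) * du
    return area / math.tan(math.radians(alpha_deg))

# Invented parallel-tool curve: force per unit width [N/mm] vs penetration [mm],
# rising to a peak and dropping to zero at fracture.
u = [0.0, 0.5, 1.0, 1.5, 2.0]
f = [0.0, 400.0, 600.0, 300.0, 0.0]
print(angled_tool_force(u, f, alpha_deg=2.0))
```

Note that in this formulation a larger rake angle lowers the required force, which is consistent with why guillotine shears use angled blades in the first place.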

Relevance:

40.00%

Publisher:

Abstract:

This article aims to explore the relationship between clients' narrative transformation and the promotion of vocational decidedness and career maturity in a mid-adolescent case of Life Design Counseling (LDC). To assess LDC outcomes, the Vocational Certainty Scale and the Career Maturity Inventory – Form C were used before and after the intervention. To analyze the process of LDC change intensively, two measures of narrative change were used: the Innovative Moments Coding System (IMCS), as a measure of innovation emergence, and the Return to the Problem Coding System (RPCS), as a measure of ambivalence towards change. The results show that the three LDC sessions produced a significant change in vocational certainty but not in career maturity. The findings confirm that the process of change, according to the IMCS, is similar to that observed in previous studies with adults. Implications for future research and practice are discussed.

Relevance:

40.00%

Publisher:

Abstract:

Transdisciplinarity gained importance in the 1970s, with the initial signs of weakness of both multi- and interdisciplinary approaches. This weakness was felt due to the increased complexity of the social and technological landscapes. Generally, discussion of the transdisciplinary topic is centred on the social and health sciences. Therefore, the major challenge in this research is to adapt design research to the emerging transdisciplinary discussion. Based on a comparative and critical review of several engineering and design models of the design process, we advocate the importance of collaboration and conceptualisation for these disciplines. A transdisciplinary and conceptual cooperation between the engineering and industrial design disciplines is therefore considered decisive to create breakthroughs. Furthermore, a synthesis is proposed in order to foster the cooperation between engineering and industrial design.

Relevance:

40.00%

Publisher:

Abstract:

This work deals with the development of calibration procedures and control systems to improve the performance and efficiency of modern spark-ignition turbocharged engines. The algorithms developed are used to optimize and manage the spark advance and the air-to-fuel ratio, to control knock and the exhaust gas temperature at the turbine inlet. The work described here falls within the activity that the research group started in previous years with the industrial partner Ferrari S.p.A. The first chapter deals with the development of a control-oriented engine simulator based on a neural network approach, with which the main combustion indexes can be simulated. The second chapter deals with the development of a procedure to calibrate offline the spark advance and the air-to-fuel ratio to run the engine under knock-limited conditions and with the maximum admissible exhaust gas temperature at the turbine inlet. This procedure is then converted into a model-based control system and validated with a software-in-the-loop approach using the engine simulator developed in the first chapter. Finally, it is implemented in rapid-control-prototyping hardware to manage the combustion in steady-state and transient operating conditions at the test bench. The third chapter deals with the study of an innovative and cheap sensor for in-cylinder pressure measurement: a piezoelectric washer that can be installed between the spark plug and the engine head. The signal generated by this kind of sensor is studied, and a specific algorithm is developed to adjust the value of the knock index in real time. Finally, with the engine simulator developed in the first chapter, it is demonstrated that the innovative sensor can be coupled with the control system described in the second chapter, and that the performance obtained could match that reachable with standard in-cylinder pressure sensors.
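A minimal sketch of the control-oriented neural-network idea: a one-hidden-layer network trained by gradient descent to map one input to one combustion index. The real simulator is trained on test-bench measurements and predicts several combustion indexes; the synthetic data, network size and learning rate below are assumptions made purely for illustration.

```python
import numpy as np

# Toy stand-in for the neural-network engine simulator: fit a one-hidden-layer
# network to a synthetic mapping from normalised spark advance to a
# (made-up) knock-intensity index.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64).reshape(-1, 1)     # normalised spark advance
y = np.exp(3.0 * x) / np.exp(3.0)                # synthetic knock index

W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)                     # hidden layer
    pred = h @ W2 + b2
    err = pred - y
    # Backpropagation of the mean-squared error.
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print(mse)
```

Once trained, such a model is cheap enough to evaluate inside a software-in-the-loop validation loop, which is the role the simulator plays in the second chapter.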

Relevance:

40.00%

Publisher:

Abstract:

In recent decades, organic semiconductors have attracted attention due to their possible employment in solution-processed optoelectronic and electronic devices. One of the advantages of solution processing is the possibility of processing onto flexible substrates at low cost. Organic molecular materials tend to form polymorphs, which can exhibit very different properties. In most cases, control of the crystal structure is decisive to maximize the performance of the final device. Although organic electronics has progressed considerably, n-type organic semiconductors still lag behind p-type ones, presenting challenges such as air instability and poor solubility. NDI derivatives are promising candidates for applications in organic electronics due to their characteristics. Recently, the structure-property relationship and the polymorphism of these molecules have gained attention. In the first part of this thesis, the thermal behavior of NDI-C6 was extensively explored, revealing two different behaviors depending on the annealing process. This study made it possible to define the stability ranking of the NDI-C6 bulk forms and to determine the crystal structure of Form γ at 54 °C. Additionally, the polymorphic and thermal behavior of NDI-C6 thin films was also explored. It was possible to isolate pure Form α, Form β, Form γ and a new metastable Form ε, and to determine the stability ranking of the phases in thin films. OFETs were fabricated with the different polymorphs as the active layer; unfortunately, the performance was not ideal. In the second part of this thesis, core-chlorinated NDIs with fluoroalkyl chains were studied. Initially, the focus was on the polymorphism of CF3-NDI, which revealed a solvate form with a very interesting molecular arrangement suggesting the possibility of forming charge-transfer co-crystals.
In the last part of the thesis, the synthesis and characterization of CT co-crystal with different NDI derivatives, and acceptor and as donor BTBT and ditBu-BTBT were explored.

Relevância: 40.00%

Resumo:

The design process of any electric vehicle system has to be oriented towards the best energy efficiency, together with the constraint of maintaining comfort in the vehicle cabin. The main aim of this study is to find the best thermal management solution in terms of HVAC efficiency without compromising the occupants' comfort and internal air quality. An Arduino-controlled low-cost system of sensors was developed, compared against reference instrumentation (average R-squared of 0.92), and then used to characterise the vehicle cabin in real parking and driving trials. Data on the energy use of the HVAC system was retrieved from the car's On-Board Diagnostics port. Energy savings using recirculation can reach 30%, but pollutant concentrations in the cabin build up in this operating mode. Moreover, the temperature profile appeared strongly non-uniform, with air temperature differences of up to 10 °C. Optimisation methods often require a high number of runs to find the optimal configuration of the system. Fast models proved to be beneficial for this task, while CFD-1D models are usually slower despite the higher level of detail they provide. In this work, the collected dataset was used to train a fast ML model of both the cabin and the HVAC system using linear regression. The average scaled RMSE over all trials is 0.4%, while the computation time is 0.0077 ms for each second of simulated time on a laptop computer. Finally, a reinforcement learning environment was built in OpenAI Gym and Stable-Baselines3, using the built-in Proximal Policy Optimisation algorithm to update the policy and seek the best compromise between the comfort, air quality, and energy reward terms. The learning curves show an oscillating behaviour overall, with only 2 experiments behaving as expected, and even those converged too slowly. This result leaves large room for improvement, ranging from reward function engineering to expansion of the ML model.
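The fast linear surrogate described above can be sketched as an ordinary least-squares fit evaluated with a range-scaled RMSE. The features (cabin temperature, HVAC power, outside temperature), the coefficients, and the data below are synthetic placeholders for illustration, not the thesis dataset or its actual model structure.

```python
import numpy as np

# Sketch of a fast linear surrogate: predict the next-step cabin temperature
# from current cabin temperature, HVAC power and outside temperature.
# All features, coefficients and data are synthetic and illustrative.
rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.uniform(15.0, 35.0, n),   # cabin temperature [deg C]
    rng.uniform(0.0, 3.0, n),     # HVAC power [kW]
    rng.uniform(-5.0, 40.0, n),   # outside temperature [deg C]
    np.ones(n),                   # intercept term
])
w_true = np.array([0.90, -1.20, 0.05, 1.00])
y = X @ w_true + rng.normal(0.0, 0.1, n)    # synthetic "measured" response

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least-squares fit
rmse = np.sqrt(np.mean((X @ w - y) ** 2))
scaled_rmse = rmse / (y.max() - y.min())    # RMSE scaled by the data range
print(f"scaled RMSE: {100.0 * scaled_rmse:.2f} %")
```

Because evaluating such a model is a single matrix-vector product, it is fast enough to sit inside the inner loop of an optimiser or a reinforcement learning environment, which is precisely why fast surrogates are preferred over CFD-1D models for this task.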

Relevância: 40.00%

Resumo:

The aim of this thesis is to demonstrate that 3D-printing technologies can be significantly attractive for the production of microwave devices and for antenna design, with the intention of making them lightweight, cheaper, and easily integrable for the production of wireless, battery-free, wearable devices for vital-sign monitoring. In this work, a new 3D-printable, low-cost resin material, the Flexible80A, is proposed as an RF substrate in the implementation of a rectifying antenna (rectenna) operating at 2.45 GHz for wireless power transfer. A careful and accurate electromagnetic characterization of this material, revealing it to be a very lossy substrate, has paved the way for the investigation of innovative transmission-line and antenna layouts, as well as etching techniques, made possible by the design freedom of 3D-printing technologies, with the aim of improving wave propagation within lossy materials. This analysis is crucial in the design process of a patch antenna, meant to be subsequently connected to the rectifier. In fact, many different patch antenna layouts are explored, varying the antenna dimensions, the shape and position of the substrate etchings, the feeding-line technology, and the operating frequency. Before dealing with the rectification stage of the rectenna design, the long-discussed topic of the equivalent circuit representation of a receiving antenna is addressed, providing an overview of different authors' interpretations of the issue and the position adopted in this thesis. Furthermore, two rectenna designs are proposed and simulated with the aim of minimizing the dielectric losses. Finally, a prototype of a rectenna with the antenna conjugate-matched to the rectifier, operating at 2.45 GHz, has been fabricated with adhesive copper on a substrate sample of Flexible80A and measured, in order to validate the simulated results.