877 results for system selection and implementation
Abstract:
This research is motivated by the need to consider lot sizing while accepting customer orders in a make-to-order (MTO) environment, in which each customer order must be delivered by its due date. The job shop is the typical operation model in MTO, where the production planner must make three concurrent decisions: order selection, lot sizing, and job scheduling. These decisions are usually treated separately in the literature and are mostly addressed with heuristic solutions. The first phase of the study focuses on a formal definition of the problem. Mathematical programming techniques are applied to model the problem in terms of its objective, decision variables, and constraints. A commercial solver, CPLEX, is used to solve the resulting mixed-integer linear programming model on small instances to validate the mathematical formulation. The computational results show that solving industrial-size problems with a commercial solver is not practical. The second phase of the study focuses on developing an effective solution approach for large-scale instances of the problem. The proposed approach is an iterative process involving three sequential decision steps: order selection, lot sizing, and lot scheduling. A range of simple sequencing rules is identified for each of the three subproblems. Using computer simulation, an experiment is designed to evaluate their performance against a set of system parameters. For order selection, the proposed weighted most profit rule performs best. The shifting bottleneck and earliest operation finish time rules are both the best scheduling rules. For lot sizing, the proposed minimum cost increase heuristic, based on the Dixon-Silver method, performs best when the demand-to-capacity ratio at the bottleneck machine is high; the proposed minimum cost heuristic, based on the Wagner-Whitin algorithm, is the best lot-sizing heuristic for shops with a low demand-to-capacity ratio.
The proposed heuristic is applied to an industrial case to further evaluate its performance. The results show it improves total profit by an average of 16.62%. This research contributes to the production planning research community a complete mathematical definition of the problem and an effective approach for solving it at industrial scale.
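The Wagner-Whitin algorithm underlying the proposed minimum cost heuristic is a classic dynamic program for uncapacitated single-item lot sizing. A minimal sketch of the basic algorithm follows (illustrative cost values; this is the textbook DP, not the thesis's heuristic, which builds on it):

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Optimal uncapacitated lot sizing via the Wagner-Whitin DP.

    demand: list of per-period demands
    setup_cost: fixed cost per production run
    holding_cost: cost to carry one unit for one period
    Returns (minimum total cost, sorted list of production periods).
    """
    T = len(demand)
    INF = float("inf")
    F = [0.0] + [INF] * T          # F[t] = min cost to cover periods 1..t
    choice = [0] * (T + 1)         # choice[t] = start of the last run covering t
    for t in range(1, T + 1):
        for j in range(1, t + 1):  # candidate: last run produced in period j
            hold = sum(holding_cost * (k - j) * demand[k - 1]
                       for k in range(j, t + 1))
            cost = F[j - 1] + setup_cost + hold
            if cost < F[t]:
                F[t], choice[t] = cost, j
    # backtrack to recover the production periods
    runs, t = [], T
    while t > 0:
        runs.append(choice[t])
        t = choice[t] - 1
    return F[T], sorted(runs)
```

For demands [20, 50, 10, 50] with a setup cost of 100 and a unit holding cost of 1, the optimal plan produces in periods 1 and 4 at a total cost of 270.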
Abstract:
The purpose of this thesis was to develop an efficient routing protocol that provides mobility support to mobile devices roaming within a network. The routing protocol needs to be compatible with the existing Internet architecture. The protocol proposed here is based on the Mobile IP routing protocol and can solve some of the problems in current Mobile IP implementations, e.g., the ingress filtering problem. By implementing an efficient timeout mechanism and introducing a paging mechanism to the wireless network, the protocol minimizes the number of control messages sent over the network. The implementation consists primarily of three components: 1) mobile devices that need to gain access to the network, 2) a router providing roaming support to the mobile devices, and 3) a database server providing basic authentication services for the system. As a result, an efficient IP routing protocol was developed that provides seamless mobility to mobile devices on the network.
Abstract:
Using the NEODAAS-Dundee AVHRR receiving station (Scotland), NEODAAS-Plymouth can provide calibrated brightness temperature data to end users or interim users in near-real time. Between 2000 and 2009 these data were used to undertake volcano hot spot detection, reporting and time-average discharge rate dissemination during effusive crises at Mount Etna and Stromboli (Italy). Data were passed via FTP, within an hour of image generation, to the hot spot detection system maintained at Hawaii Institute of Geophysics and Planetology (HIGP, University of Hawaii at Manoa, Honolulu, USA). Final product generation and quality control were completed manually at HIGP once a day, so as to provide information to onsite monitoring agencies for their incorporation into daily reporting duties to Italian Civil Protection. We here describe the processing and dissemination chain, which was designed so as to provide timely, useable, quality-controlled and relevant information for ‘one voice’ reporting by the responsible monitoring agencies.
Abstract:
In this paper, we investigate the secrecy performance of an energy harvesting relay system, where a legitimate source communicates with a legitimate destination with the assistance of multiple trusted relays. In the considered system, the source and relays deploy the time-switching-based radio frequency energy harvesting technique to harvest energy from a multi-antenna beacon. Different antenna selection and relay selection schemes are applied to enhance the security of the system. Specifically, two relay selection schemes based on full and partial knowledge of channel state information, i.e., optimal relay selection and partial relay selection, and two antenna selection schemes for harvesting energy at the source and relays, i.e., maximizing the energy harvesting channel for the source and maximizing the energy harvesting channel for the selected relay, are proposed. Exact and asymptotic expressions for the secrecy outage probability of these schemes are derived. We demonstrate that applying relay selection in the considered energy harvesting system can enhance the secrecy performance. In particular, the optimal relay selection scheme outperforms the partial relay selection scheme and achieves full secrecy diversity order, regardless of the energy harvesting scenario.
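The secrecy outage probability analyzed above measures how often the secrecy capacity falls below a target secrecy rate. As a hedged illustration of the metric itself, here is a Monte Carlo sketch for a single Rayleigh-fading link (the parameters and single-hop setup are illustrative, not the paper's relay system model):

```python
import math
import random

def secrecy_outage_mc(avg_snr_d, avg_snr_e, rate_s, n_trials=200_000, seed=1):
    """Monte Carlo estimate of secrecy outage probability for one
    Rayleigh-fading link: an outage occurs when
    log2(1 + g_D) - log2(1 + g_E) < rate_s,
    with g_D, g_E exponentially distributed instantaneous SNRs."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(n_trials):
        g_d = rng.expovariate(1.0 / avg_snr_d)   # legitimate-link SNR
        g_e = rng.expovariate(1.0 / avg_snr_e)   # eavesdropper-link SNR
        c_s = max(0.0, math.log2(1.0 + g_d) - math.log2(1.0 + g_e))
        outages += (c_s < rate_s)
    return outages / n_trials
```

For this simple single-link case the estimate can be checked against the known closed form P_out = 1 - (g_D / (g_D + 2^R * g_E)) * exp(-(2^R - 1) / g_D), where g_D and g_E are the average SNRs.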
Abstract:
The paper describes the design and implementation of a novel low-cost virtual rugby decision-making interactive for use in a visitor centre. Original laboratory-based experimental work on decision making in rugby, using a virtual reality headset [1], is adapted for use in a public visitor centre, with consideration given to usability, cost, practicality, and health and safety. Movement of professional rugby players was captured and animated within a virtually recreated stadium. Users then interact with these virtual representations via a low-cost sensor (Microsoft Kinect) to attempt to block them. Retaining the principles of perception and action, egocentric viewpoint, immersion, sense of presence, representative design and game design, the system delivers an engaging and effective interactive that illustrates the underlying scientific principles of deceptive movement. User testing highlighted the need for usability, system robustness, fair and accurate scoring, an appropriate level of difficulty, and enjoyment.
Abstract:
In the last twenty years, the aerospace and automotive industries have worked extensively with composite materials, which are not easy to test using classic Non-Destructive Inspection (NDI) techniques. In parallel, the development of safety regulations sets ever higher standards for the qualification and certification of those materials. In this thesis a new concept for a non-destructive defect detection technique is proposed, based on Ultra-Wideband (UWB) Synthetic Aperture Radar (SAR) imaging. Similar SAR methods have already been applied in both minefield [22] and head stroke [14] detection. Moreover, feasibility studies have already demonstrated the validity of defect detection by means of UWB radars [12, 13]. The system was designed using a cheap commercial off-the-shelf radar device by Novelda, and several tests of the developed system were performed both on a metallic specimen (aluminum plate) and on a composite coupon (carbon fiber). The obtained results confirm the feasibility of the method and highlight the good performance of the developed system given the radar resolution. In particular, the system is capable of discerning healthy coupons from damaged ones and correctly reconstructs the reflectivity image of the tested defects, namely an 8 x 8 mm square bulge and 5 mm drilled holes on the metal specimen, and a 5 mm drilled hole on the composite coupon.
Abstract:
Numerical modelling and simulations are needed to develop and test specific analysis methods by providing test data before BIRDY is launched. This document describes the "satellite data simulator", a multi-sensor, multi-spectral satellite simulator produced especially for the BIRDY mission, which could also be used to analyse data from other satellite missions providing energetic particle data in the Solar System.
Abstract:
This portfolio thesis describes work undertaken by the author under the Engineering Doctorate program of the Institute for System Level Integration. It was carried out in conjunction with the sponsor company, Teledyne Defence Limited. A radar warning receiver is a device used to detect and identify the emissions of radars. Such receivers were originally developed during the Second World War and are found today on a variety of military platforms as part of the platform's defensive systems. Teledyne Defence has designed and built components and electronic subsystems for the defence industry since the 1970s. This thesis documents part of the work carried out to create Phobos, Teledyne Defence's first complete radar warning receiver. Phobos was designed to be the first low-cost radar warning receiver. This was made possible by the reuse of existing Teledyne Defence products, commercial off-the-shelf hardware, and advanced UK government algorithms. The challenges of this integration are described and discussed, with detail given of the software architecture and the development of the embedded application. The performance of the embedded system as a whole is described and qualified within the context of a low-cost system.
Abstract:
Development of no-tillage (NT) farming has revolutionized agricultural systems by allowing growers to manage greater areas of land with reduced energy, labour and machinery inputs, to control erosion, improve soil health and reduce greenhouse gas emissions. However, NT farming systems have resulted in a build-up of herbicide-resistant weeds, an increased incidence of soil- and stubble-borne diseases, and enrichment of nutrients and carbon near the soil surface. Consequently, there is increased interest in the use of occasional tillage (termed strategic tillage, ST) to address such emerging constraints in otherwise-NT farming systems. Decisions around ST use will depend upon the specific issues present on the individual field or farm, and the profitability and effectiveness of the available management options. This paper explores some of the issues with the implementation of ST in NT farming systems. Contrasting soil properties, the timing of the tillage and the prevailing climate exert a strong influence on the success of ST. Decisions around the timing of tillage are very complex and depend on the interactions between soil water content and the purpose for which the ST is intended. The soil needs to be at the right water content before executing any tillage, while the objective of the ST will influence the frequency and type of tillage implement used. The use of ST in long-term NT systems will depend on factors associated with system costs and profitability, soil health and environmental impacts. For many farmers, maintaining farm profitability is a priority, so economic considerations are likely to be a primary factor dictating adoption. However, impacts on soil health and the environment, especially the risk of erosion and the loss of soil carbon, will also influence a grower's choice to adopt ST, as will the impact on soil moisture reserves in rainfed cropping systems. © 2015 Elsevier B.V.
Abstract:
Biogeochemical-Argo is the extension of the Argo array of profiling floats to include floats that are equipped with biogeochemical sensors for pH, oxygen, nitrate, chlorophyll, suspended particles, and downwelling irradiance. Argo is a highly regarded, international program that measures the changing ocean temperature (heat content) and salinity with profiling floats distributed throughout the ocean. Newly developed sensors now allow profiling floats to also observe biogeochemical properties with sufficient accuracy for climate studies. This extension of Argo will enable an observing system that can determine the seasonal to decadal-scale variability in biological productivity, the supply of essential plant nutrients from deep waters to the sunlit surface layer, ocean acidification, hypoxia, and ocean uptake of CO2. Biogeochemical-Argo will drive a transformative shift in our ability to observe and predict the effects of climate change on ocean metabolism, carbon uptake, and living marine resource management. Presently, vast areas of the open ocean are sampled only once per decade or less, with sampling occurring mainly in summer. Our ability to detect changes in biogeochemical processes that may occur due to the warming and acidification driven by increasing atmospheric CO2, as well as by natural climate variability, is greatly hindered by this undersampling. In close synergy with satellite systems (which are effective at detecting global patterns for a few biogeochemical parameters, but only very close to the sea surface and in the absence of clouds), a global array of biogeochemical sensors would revolutionize our understanding of ocean carbon uptake, productivity, and deoxygenation. The array would reveal the biological, chemical, and physical events that control these processes.
Such a system would enable a new generation of global ocean prediction systems in support of carbon cycling, acidification, hypoxia and harmful algal bloom studies, as well as the management of living marine resources. In order to prepare for a global Biogeochemical-Argo array, several prototype profiling float arrays have been developed at the regional scale by various countries and are now operating. Examples include regional arrays in the Southern Ocean (SOCCOM), the North Atlantic Subpolar Gyre (remOcean), the Mediterranean Sea (NAOS), the Kuroshio region of the North Pacific (INBOX), and the Indian Ocean (IOBioArgo). For example, the SOCCOM program is deploying 200 profiling floats with biogeochemical sensors throughout the Southern Ocean, including areas covered seasonally with ice. The resulting data, which are publicly available in real time, are being linked with computer models to better understand the role of the Southern Ocean in influencing CO2 uptake, biological productivity, and nutrient supply to distant regions of the world ocean. The success of these regional projects motivated a planning meeting to discuss the requirements for, and applications of, a global-scale Biogeochemical-Argo program. The meeting was held 11-13 January 2016 in Villefranche-sur-Mer, France, with attendees from the eight nations now deploying Argo floats with biogeochemical sensors. In preparation, computer simulations and a variety of analyses were conducted to assess the resources required for the transition to a global-scale array. Based on these analyses and simulations, it was concluded that an array of about 1000 biogeochemical profiling floats would provide the needed resolution to greatly improve our understanding of biogeochemical processes and to enable significant improvement in ecosystem models.
With an endurance of four years for a Biogeochemical-Argo float, this system would require the procurement and deployment of 250 new floats per year to maintain a 1000-float array. The lifetime cost of a Biogeochemical-Argo float, including capital expense, calibration, data management, and data transmission, is about $100,000. A global Biogeochemical-Argo system would thus cost about $25,000,000 annually. In the present Argo paradigm, the US provides half of the profiling floats in the array, while the EU, Austral/Asia, and Canada share most of the remaining half. If this approach is adopted, the US cost for the Biogeochemical-Argo system would be ~$12,500,000 annually, and ~$6,250,000 each for the EU and for Austral/Asia and Canada. This includes no direct costs for ship time and presumes that float deployments can be carried out from future research cruises of opportunity, including, for example, the international GO-SHIP program (http://www.go-ship.org). The full-scale implementation of a global Biogeochemical-Argo system with 1000 floats is feasible within a decade. The successful, ongoing pilot projects have provided the foundation and starting point for such a system.
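The quoted budget follows from simple arithmetic; a quick check of the figures (the even split of the non-US half between the EU and the Austral/Asia-plus-Canada group is an assumption made here to match the quoted per-party figure):

```python
# Check of the Biogeochemical-Argo budget figures quoted above (US dollars).
ENDURANCE_YEARS = 4          # stated float endurance
ARRAY_SIZE = 1000            # target global array size
COST_PER_FLOAT = 100_000     # lifetime cost: capital, calibration, data

floats_per_year = ARRAY_SIZE // ENDURANCE_YEARS    # replacements needed/year
annual_cost = floats_per_year * COST_PER_FLOAT     # global annual cost
us_share = annual_cost // 2                        # US provides half
partner_share = (annual_cost - us_share) // 2      # each remaining party
```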
Abstract:
This document presents an Enterprise Application Integration based proposal for research outcomes and technological information management. The proposal addresses national and international science and research outcomes information management, and the corresponding information systems. Information systems interoperability problems, approaches, technologies and integration tools are presented and applied to the research outcomes information management case. A business and technological perspective is provided, including the conceptual analysis and modelling, an integration solution based on a Domain-Specific Language (DSL), and the integration platform to execute the proposed solution. For illustrative purposes, the role and information system needs of a research unit are assumed as the representative case.
Abstract:
The thesis deals with topics that led to the development of innovative control-oriented models and control algorithms for modern gasoline engines. Knock in boosted spark ignition engines is the widest topic discussed in this document, because it remains one of the most limiting factors for maximizing combustion efficiency in this kind of engine. The first chapter is thus focused on knock, and a wide literature review is proposed to summarize the preliminary knowledge that represents the background and reference for the discussed activities. The most relevant results achieved during the PhD course in the field of knock modelling and control are then presented, describing every control-oriented model that led to the development of an adaptive model-based combustion control system. The complete controller was developed in the context of the collaboration with Ferrari GT, and it made it possible to completely redefine the knock intensity evaluation as well as the combustion phase control. The second chapter is focused on a prototype port water injection system that was developed and tested on a turbocharged spark ignition engine within the collaboration with Magneti Marelli. This system and the effects of injected water on the combustion process were then modeled in a 1-D simulation environment (GT Power). The third chapter shows the development and validation of a control-oriented model for the real-time calculation of exhaust gas temperature, which represents another important limitation on the performance increase of modern boosted engines. Indeed, modelling of exhaust gas temperature and thermocouple behavior are themes that play a key role in the optimization of combustion and catalyst efficiency.
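Thermocouple behaviour of the kind modelled in the third chapter is commonly captured by a first-order lag. A generic discrete-time sketch follows (time constant, step size, and temperatures are illustrative; this is the textbook model, not the thesis's control-oriented one):

```python
def thermocouple_response(gas_temp, dt, tau):
    """First-order lag model of a thermocouple reading a gas temperature
    trace: the sensor output exponentially approaches the true gas
    temperature with time constant tau (seconds).

    gas_temp: sampled true gas temperature trace
    dt: sample period in seconds
    Returns the simulated sensor readings.
    """
    t_meas = [gas_temp[0]]               # assume sensor starts in equilibrium
    for t_gas in gas_temp[1:]:
        t_prev = t_meas[-1]
        # forward-Euler step of dT_meas/dt = (T_gas - T_meas) / tau
        t_meas.append(t_prev + dt / tau * (t_gas - t_prev))
    return t_meas
```

A step in gas temperature shows the characteristic sensor delay: the reading closes about 63% of the gap after one time constant and only slowly converges to the true value.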
Abstract:
This thesis introduces basic MIMO-based and massive MIMO-based systems and their possible benefits. It then goes through the implementation options available, according to 3GPP standards, for 5G systems, and how the transition is made from a non-standalone 5G RAN to a completely standalone 5G RAN. Having introduced these subjects and provided some definitions of telecommunications principles, we move on to a more technical analysis of capacity, throughput, power consumption, and costs, comparing all of these parameters between a massive MIMO-based system and a MIMO-based system. In the analysis of power consumption and costs, we also introduce the concept of virtualization and its benefits in terms of both power and costs. Finally, we try to justify a trade-off between having a more reliable system with high capacity and throughput while keeping costs as low as possible.
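The capacity comparison above rests on the standard MIMO ergodic capacity expression C = E[log2 det(I + (snr/Nt) H H^H)]. A pure-Python Monte Carlo sketch for square i.i.d. Rayleigh channels follows (illustrative, not the thesis's analysis):

```python
import math
import random

def _det(m):
    """Determinant of a small complex matrix via Gaussian elimination."""
    m = [row[:] for row in m]
    n = len(m)
    det = 1 + 0j
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(m[r][i]))  # partial pivot
        if p != i:
            m[i], m[p] = m[p], m[i]
            det = -det
        det *= m[i][i]
        for r in range(i + 1, n):
            f = m[r][i] / m[i][i]
            for c in range(i, n):
                m[r][c] -= f * m[i][c]
    return det

def ergodic_capacity(n_ant, snr_db, n_trials=2000, seed=0):
    """Monte Carlo ergodic capacity (bits/s/Hz) of an n_ant x n_ant i.i.d.
    Rayleigh MIMO channel: C = E[log2 det(I + (snr/n_ant) H H^H)]."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    total = 0.0
    for _ in range(n_trials):
        # unit-variance complex Gaussian channel entries
        h = [[complex(rng.gauss(0, 1), rng.gauss(0, 1)) / 2 ** 0.5
              for _ in range(n_ant)] for _ in range(n_ant)]
        # M = I + (snr / n_ant) * H H^H
        m = [[(1 if i == j else 0)
              + snr / n_ant * sum(h[i][k] * h[j][k].conjugate()
                                  for k in range(n_ant))
              for j in range(n_ant)] for i in range(n_ant)]
        total += math.log2(_det(m).real)
    return total / n_trials
```

At a fixed SNR the estimate grows roughly linearly with the number of antennas, which is the multi-antenna throughput argument in miniature.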
Abstract:
Driving simulators emulate a real vehicle drive in a virtual environment. One of the most challenging problems in this field is to create a simulated drive as real as possible, deceiving the driver's senses and making the driver believe they are in a real vehicle. This thesis first provides an overview of the Stuttgart driving simulator, with a description of the overall system, followed by a theoretical presentation of the commonly used motion cueing algorithms. The second and predominant part of the work presents the implementation of the classical and optimal washout algorithms in a Simulink environment. The project aims to create a new optimal washout algorithm and compare the obtained results with the results of the classical washout. The classical washout algorithm, already implemented in the Stuttgart driving simulator, is the most widely used in simulator motion control. This classical algorithm is based on a sequence of filters in which each parameter has a clear physical meaning and a unique assignment to a single degree of freedom. However, the effects on human perception are not exploited, and each parameter must be tuned online by an engineer in the control room, depending on the driver's feeling. To overcome this problem and also consider the driver's sensations, the optimal washout motion cueing algorithm was implemented. This optimal-control-based algorithm treats motion cueing as a tracking problem, forcing the accelerations perceived in the simulator to track the accelerations that would be perceived in a real vehicle, by minimizing the perception error within the constraints of the motion platform. The last chapter presents a comparison between the two algorithms, based on the driver's feelings after the test drive. First, an off-line test with a step signal as the input acceleration was implemented to verify the behaviour of the simulator. Second, the algorithms were executed in the simulator during test drives on several tracks.
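The sequence of filters in the classical washout described above is built from high-pass stages that pass acceleration onsets and wash out sustained acceleration so the platform drifts back to neutral. A minimal discrete-time sketch of one such stage follows (first-order filter; the time constant and step size are illustrative):

```python
def high_pass(signal, dt, tau):
    """Discrete first-order high-pass filter, a basic building block of a
    classical washout: passes rapid changes in the input acceleration and
    washes out sustained components with time constant tau (seconds).

    signal: sampled input acceleration
    dt: sample period in seconds
    Returns the filtered signal.
    """
    alpha = tau / (tau + dt)
    out = [0.0] * len(signal)
    for k in range(1, len(signal)):
        out[k] = alpha * (out[k - 1] + signal[k] - signal[k - 1])
    return out
```

A step in input acceleration passes through almost unchanged at first and then decays toward zero, which is exactly the washout behaviour that returns the platform to its neutral position.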