885 results for Advanced Planning and Scheduling systems


Relevance:

100.00%

Publisher:

Abstract:

Computational performance increasingly depends on parallelism, and many systems rely on heterogeneous resources such as GPUs and FPGAs to accelerate computationally intensive applications. However, implementations for such heterogeneous systems are often hand-crafted and optimised for a single computation scenario, and it can be challenging to maintain high performance when application parameters change. In this paper, we demonstrate that machine learning can help to dynamically choose parameters for task scheduling and load-balancing based on changing characteristics of the incoming workload. We use a financial option pricing application as a case study. We propose a simulation of processing financial tasks on a heterogeneous system with GPUs and FPGAs, and show how dynamic, on-line optimisations could improve such a system. We compare on-line and batch processing algorithms, and we also consider cases with no dynamic optimisations.
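The dynamic parameter choice the abstract describes can be sketched as a tiny online dispatcher. This is a hypothetical illustration, not the paper's implementation: the device names and linear cost coefficients below are assumptions, and a real system would fit its cost model from measured runtimes.

```python
# Minimal sketch (assumed coefficients): an online dispatcher that picks an
# accelerator per incoming batch from a simple learned cost model.

class OnlineDispatcher:
    def __init__(self):
        # Per-device linear cost model: time ~ overhead + per_task * n.
        # In a real system these would be fitted online from observations.
        self.models = {
            "gpu":  {"overhead": 5.0, "per_task": 0.01},  # high setup, fast per task
            "fpga": {"overhead": 0.5, "per_task": 0.05},  # low setup, slower per task
        }

    def predict(self, device, n_tasks):
        m = self.models[device]
        return m["overhead"] + m["per_task"] * n_tasks

    def choose(self, n_tasks):
        # Dispatch to whichever device the model predicts is cheaper.
        return min(self.models, key=lambda d: self.predict(d, n_tasks))

    def update(self, device, n_tasks, observed_time):
        # Crude online correction: nudge the per-task cost toward the
        # observation (stochastic-gradient flavour, learning rate 0.1).
        m = self.models[device]
        err = observed_time - self.predict(device, n_tasks)
        m["per_task"] += 0.1 * err / max(n_tasks, 1)

d = OnlineDispatcher()
print(d.choose(10))    # small batch: FPGA's low overhead wins -> "fpga"
print(d.choose(1000))  # large batch: GPU's throughput wins -> "gpu"
```

The `update` step is where the workload-dependent adaptation happens: as incoming batch characteristics drift, the per-device estimates drift with them and the dispatch decision can flip.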

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents a detailed numerical analysis, fabrication method and experimental investigation of 45º tilted fiber gratings (45º-TFGs) and excessively tilted fiber gratings (Ex-TFGs), and their applications in fiber laser and sensing systems. One of the most significant contributions of the work reported in this thesis is that 45º-TFGs with high polarization extinction ratio (PER) have been fabricated in single-mode telecom and polarization-maintaining (PM) fibers, with spectral responses covering three prominent optical communication wavelength ranges centred at 1060 nm, 1310 nm and 1550 nm. The highest PERs achieved for the 45º-TFGs reach 35-50 dB, matching and in some cases exceeding commercial in-fiber polarizers. It has been proposed that high-PER 45º-TFGs can be used as ideal in-fiber polarizers for a wide range of fiber systems and applications. In addition, detailed theoretical models and analysis have been developed, and systematic experimental evaluation has been conducted, producing results in excellent agreement with theoretical modeling. Another important outcome of the research is the proposal and demonstration of all-fiber Lyot filters (AFLFs), implemented by utilizing two (for a single-stage type) or more (for multi-stage) 45º-TFGs in a PM fiber cavity structure. Detailed theoretical analysis and modelling of such AFLFs have also been carried out, giving design guidance for practical implementation. The unique functional advantages of 45º-TFG based AFLFs have been revealed, showing high-finesse multi-wavelength transmission of single polarization and a wide tuning range. Temperature tuning results have shown that AFLFs have 60 times higher thermal sensitivity than normal FBGs, permitting a thermal tuning rate of ~8 nm/10 ºC.
By using an intra-cavity AFLF, an all-fiber soliton mode-locked laser with almost total suppression of soliton sidebands, single-polarization output and single/multi-wavelength switchable operation has been demonstrated. The final significant contribution is the theoretical analysis and experimental verification of the design, fabrication and sensing applications of Ex-TFGs. A sensitivity model of the Ex-TFG to the surrounding-medium refractive index (SRI) has been developed for the first time, and the factors that affect thermal and SRI sensitivity in relation to wavelength range, tilt angle and cladding size have been investigated. As a practical SRI sensor, an 81º-TFG UV-inscribed in fiber with a small (40 μm) cladding radius has shown an SRI sensitivity of up to 1180 nm/RIU in the refractive-index range around 1.345. Finally, to ensure single-polarization detection in such an SRI sensor, a hybrid configuration formed by UV-inscribing a 45º-TFG and an 81º-TFG close together on the same piece of fiber has been demonstrated as a more advanced SRI sensing system.

Relevance:

100.00%

Publisher:

Abstract:

Systems analysis (SA) is widely used to solve complex and vague problems. The initial stages of SA decompose problems and purposes into sub-problems and sub-purposes of lower complexity and vagueness, which are combined into hierarchical structures of problems (SP) and purposes (PS). Managers need assurance that the PS, and the purpose-realizing system (PRS) that achieves the PS-purposes, are adequate to the problem being solved. However, SP/PS are usually not well substantiated, because their development rests on collective expertise that relies on natural-language logic and expert-estimation methods. For this reason the scientific foundations of SA cannot yet be considered fully formed. The structure-and-purpose approach to SA, based on logic-and-linguistic simulation of problem/purpose analysis, is a step towards formalizing the initial stages of SA, improving the adequacy of their results and the quality of SA as a whole. Using this approach, managers of industrial organizing systems can eliminate logical errors in SP/PS at early planning stages and thus find better solutions to complex and vague problems.

Relevance:

100.00%

Publisher:

Abstract:

Progress on advanced active and passive photonic components that are required for high-speed optical communications over hollow-core photonic bandgap fiber at wavelengths around 2 μm is described in this paper. Single-frequency lasers capable of operating at 10 Gb/s and covering a wide spectral range are realized. A comparison is made between waveguide and surface-normal photodiodes, with the latter showing good sensitivity up to 15 Gb/s. Passive waveguides, 90° optical hybrids, and an arrayed waveguide grating with 100-GHz channel spacing are demonstrated on a large spot-size waveguide platform. Finally, a strong electro-optic effect using the quantum confined Stark effect in strain-balanced multiple quantum wells is demonstrated and used in a Mach-Zehnder modulator capable of operating at 10 Gb/s.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a new approach to the resource allocation and scheduling mechanism that reflects the user's Quality of Experience. The proposed scheduling algorithm is examined in the context of the 3GPP Long Term Evolution (LTE) system. Pause Intensity (PI), an objective, no-reference quality assessment metric, is employed to represent user satisfaction in the eNodeB scheduler; PI is in effect a measure of discontinuity in the service. The performance of the proposed scheduling method is compared with two extreme cases, the maxCI and Round Robin scheduling schemes, which correspond to efficiency-oriented and fairness-oriented mechanisms, respectively. Our work reveals that the proposed method is able to operate between the fairness and efficiency requirements, steering user satisfaction toward the desired level.
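A per-TTI selection rule in the spirit of the abstract can be sketched as follows. This is an illustrative stand-in, not the paper's exact formulation: the weighting of channel quality (CI) by Pause Intensity, and the `alpha` parameter, are assumptions.

```python
# Illustrative sketch only: serve the user with the best channel quality (CI)
# scaled up by how much stalling (PI in [0, 1], higher = worse) they suffered.

def max_ci(users):
    # Efficiency-oriented baseline: always serve the best channel.
    return max(users, key=lambda u: u["ci"])["id"]

def pi_aware(users, alpha=1.0):
    # QoE-aware rule (assumed form): channel quality weighted by a
    # discontinuity penalty derived from Pause Intensity.
    return max(users, key=lambda u: u["ci"] * (1 + alpha * u["pi"]))["id"]

users = [
    {"id": "A", "ci": 10.0, "pi": 0.0},  # great channel, smooth playback
    {"id": "B", "ci": 6.0,  "pi": 0.9},  # weaker channel, heavy stalling
]
print(max_ci(users))    # -> "A"
print(pi_aware(users))  # -> "B", since 6 * 1.9 = 11.4 > 10
```

Varying `alpha` moves the scheduler along the efficiency/fairness spectrum the abstract describes: `alpha = 0` recovers maxCI, while large `alpha` prioritises the most dissatisfied users.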

Relevance:

100.00%

Publisher:

Abstract:

Floods are among the most dangerous and common disasters worldwide, and these disasters are closely linked to the geography of the affected area. As a result, several papers in the academic field of humanitarian logistics have incorporated Geographical Information Systems (GIS) for disaster management. However, most contributions in the literature use these systems for network analysis and display, with only a few papers exploiting the capabilities of GIS to improve planning and preparedness. To show the capabilities of GIS for disaster management, this paper uses raster GIS to analyse potential flooding scenarios and provide input to an optimisation model. The combination is applied to two real-world floods in Mexico to evaluate the value of incorporating GIS in disaster planning. The results provide evidence that including GIS analysis in a decision-making tool for disaster management can improve the outcome of disaster operations by reducing the number of facilities used that are at risk of flooding. The empirical results underline the importance of integrating advanced remote-sensing imagery and GIS in future humanitarian-logistics systems.
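The coupling of a flood raster with a facility-selection model can be sketched minimally. All data below are toy assumptions (a 3x3 depth grid, made-up depots), and the greedy selection is a deliberately simplified stand-in for the paper's optimisation model.

```python
# Hedged sketch: a flood-depth raster screens candidate facilities before a
# simple coverage-style selection, so depots at flood risk are never opened.

def at_risk(raster, row, col, threshold=0.5):
    # A cell is unusable if predicted flood depth (metres) exceeds threshold.
    return raster[row][col] > threshold

def select_facilities(candidates, raster, k):
    # candidates: list of (name, row, col, demand_covered)
    safe = [c for c in candidates if not at_risk(raster, c[1], c[2])]
    # Greedy: open the k safe sites covering the most demand.
    safe.sort(key=lambda c: c[3], reverse=True)
    return [c[0] for c in safe[:k]]

flood_depth = [  # toy raster of predicted depths in metres
    [0.0, 0.2, 1.4],
    [0.1, 0.9, 0.3],
    [0.0, 0.0, 2.1],
]
candidates = [("depot_a", 0, 0, 120), ("depot_b", 1, 1, 200), ("depot_c", 2, 1, 90)]
print(select_facilities(candidates, flood_depth, 2))  # -> ['depot_a', 'depot_c']
```

Note how `depot_b`, despite covering the most demand, is excluded because its cell lies above the flood-depth threshold; this is the mechanism by which GIS input changes the optimisation outcome.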

Relevance:

100.00%

Publisher:

Abstract:

This paper integrates reverse logistics and reuse into the framework of corporate production planning. Material requirements planning (MRP) systems plan and control inventory levels and the time-phased manufacture and purchase of materials and components. Recent research has sought to extend classical MRP systems with reuse. Because newly purchased and reusable materials must then be recorded separately, MRP tables and inventories grow, and determining order quantities becomes harder, leading to more complex lot sizes. The paper presents an EOQ-type reverse-logistics inventory model together with a dynamic lot-sizing generalization, which can serve as the basis for an order-quantity heuristic that could be built into a production planning and control system such as SAP.
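The simplest way the two material supplies can be sized side by side is a pair of EOQ calculations, one per stream. This is a hedged sketch under assumed parameter values, not a reproduction of the paper's model, which handles the interaction between streams and dynamic lot sizing.

```python
# Sketch (illustrative parameters): split demand between newly purchased items
# and reusable returns, then size each lot with the classic EOQ formula.
import math

def eoq(demand, setup_cost, holding_cost):
    # Classic economic order quantity: Q* = sqrt(2 K D / h).
    return math.sqrt(2 * setup_cost * demand / holding_cost)

def order_quantities(total_demand, return_rate, setup_new, setup_reuse,
                     hold_new, hold_reuse):
    reused = total_demand * return_rate     # demand served from returns
    new = total_demand - reused             # demand served from purchases
    return eoq(new, setup_new, hold_new), eoq(reused, setup_reuse, hold_reuse)

q_new, q_reuse = order_quantities(1000, 0.3, setup_new=50, setup_reuse=20,
                                  hold_new=2.0, hold_reuse=1.0)
print(round(q_new, 1), round(q_reuse, 1))  # -> 187.1 109.5
```

Keeping two lot sizes illustrates the point made above: once returns enter the MRP tables, every order-quantity decision doubles into a new-purchase and a reuse component.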

Relevance:

100.00%

Publisher:

Abstract:

Choosing between Light Rail Transit (LRT) and Bus Rapid Transit (BRT) systems is often controversial and not an easy task for transportation planners contemplating an upgrade of their public transportation services. These two transit systems provide comparable services for medium-sized cities from the suburban neighborhood to the Central Business District (CBD) and utilize similar right-of-way (ROW) categories. The research is aimed at developing a method to assist transportation planners and decision makers in determining the most feasible system between LRT and BRT. Cost estimation is a major factor when evaluating a transit system. Typically, LRT is more expensive to build and implement than BRT, but has significantly lower Operating and Maintenance (OM) costs. This dissertation examines the factors impacting capacity and costs, and develops capacity-based cost models for the LRT and BRT systems. Various ROW categories and alignment configurations of the systems are also considered in the developed cost models. Kikuchi's fleet size model (1985) and a cost allocation method are used to estimate capacity and costs. The comparison between LRT and BRT is complicated by the many possible transportation planning and operation scenarios. Finally, a user-friendly computer interface integrating the established capacity-based cost models, the LRT and BRT Cost Estimator (LBCostor), was developed in Microsoft Visual Basic to facilitate the process and guide users through the comparison operations. The cost models and the LBCostor can be used to analyze transit volumes, alignments, ROW configurations, number of stops and stations, headway, vehicle size, and traffic signal timing at intersections. Planners can make the necessary changes and adjustments depending on their operating practices.
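The capacity-based comparison can be illustrated with the standard cycle-time/headway fleet-size relation and a split of capital versus OM costs. All unit costs below are made-up placeholders, and the fleet-size rule is the textbook relation, not a reproduction of Kikuchi's 1985 model or the LBCostor cost models.

```python
# Hedged sketch with placeholder figures: fleet size from headway, then an
# annualised cost combining capital (spread over vehicle life) and OM costs.
import math

def fleet_size(cycle_time_min, headway_min):
    # Vehicles needed to sustain a headway over a round-trip cycle.
    return math.ceil(cycle_time_min / headway_min)

def annual_cost(mode, cycle_time_min, headway_min, params):
    p = params[mode]
    n = fleet_size(cycle_time_min, headway_min)
    return n * p["capital_per_vehicle"] / p["life_years"] + n * p["om_per_vehicle"]

params = {  # illustrative unit costs only (LRT: costly to build, cheap to run)
    "LRT": {"capital_per_vehicle": 4_000_000, "life_years": 30, "om_per_vehicle": 250_000},
    "BRT": {"capital_per_vehicle": 1_000_000, "life_years": 15, "om_per_vehicle": 400_000},
}
for mode in ("LRT", "BRT"):
    print(mode, fleet_size(60, 10), round(annual_cost(mode, 60, 10, params)))
```

Even this toy version reproduces the trade-off in the abstract: which mode wins depends on headway and service life, which is why a scenario-exploration tool like LBCostor is needed rather than a single rule of thumb.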

Relevance:

100.00%

Publisher:

Abstract:

Access to healthcare is a major problem in which patients are deprived of timely admission to care. Poor access has resulted in significant but avoidable healthcare costs, poor quality of healthcare, and deterioration in general public health. Advanced Access is a simple and direct approach to appointment scheduling in which the majority of a clinic's appointment slots are kept open to provide access for immediate or same-day healthcare needs, thereby alleviating the problem of poor access to healthcare. This research formulates a non-linear discrete stochastic mathematical model of the Advanced Access appointment scheduling policy. The model objective is to maximize the expected profit of the clinic subject to constraints on the minimum access to healthcare provided. Patient behavior is characterized with probabilities for no-shows, balking, and related patient choices. Structural properties of the model are analyzed to determine whether Advanced Access patient scheduling is feasible. To solve the complex combinatorial optimization problem, a heuristic combining a greedy construction algorithm with neighborhood improvement search was developed. The model and the heuristic were used to evaluate the Advanced Access appointment policy against existing policies. Trade-offs between profit and access to healthcare are established, and sensitivity analysis of the input parameters was performed. The trade-off curve is a characteristic curve and was observed to be concave, implying that there exists an access level at which the clinic can be operated at optimal profit. The results also show that, in many scenarios, by switching from an existing scheduling policy to the Advanced Access policy, clinics can improve access without any decrease in profit.
Further, the success of the Advanced Access policy in providing improved access and/or profit depends on the expected value of demand, the variation in demand, and the ratio of demand for same-day versus advance appointments. The contributions of the dissertation are a model of Advanced Access patient scheduling, a heuristic to solve the model, and the use of the model to understand the scheduling policy trade-offs that healthcare clinic managers must make.
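The construction-plus-improvement idea can be shown on a deliberately simplified profit model. Everything here is an assumption for illustration (revenue per served patient minus an idle-slot cost, deterministic same-day demand, a single no-show probability); the dissertation's stochastic model and heuristic are far richer.

```python
# Toy sketch: how many of a clinic's slots to keep open for same-day access,
# found by greedy construction (start fully pre-booked) plus a one-step
# neighbourhood improvement search. All parameters are illustrative.

def expected_profit(open_slots, total_slots, p_noshow=0.2,
                    same_day_demand=6, revenue=100, idle_cost=30):
    booked = total_slots - open_slots
    served_booked = booked * (1 - p_noshow)             # pre-booked, some no-show
    served_same_day = min(open_slots, same_day_demand)  # walk-ins fill open slots
    idle = total_slots - served_booked - served_same_day
    return (served_booked + served_same_day) * revenue - idle * idle_cost

def optimise(total_slots=20):
    best = 0            # greedy construction: no open slots
    improved = True
    while improved:     # neighbourhood search: move one slot while profit rises
        improved = False
        for cand in (best - 1, best + 1):
            if 0 <= cand <= total_slots and \
               expected_profit(cand, total_slots) > expected_profit(best, total_slots):
                best, improved = cand, True
    return best

print(optimise())  # number of slots kept open for same-day access
```

With these numbers the search settles exactly at the same-day demand level, where extra open slots would sit idle; shifting `p_noshow` or `same_day_demand` moves the optimum, mirroring the parameter dependence noted above.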

Relevance:

100.00%

Publisher:

Abstract:

Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated in parallel lines. An example of this manufacturing system configuration is observed at a manufacturing facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial and various parallel-processing configurations with multiple product classes and job circulation due to random part failures. In addition, appropriate correction terms, obtained via regression analysis, were added to the approximations to minimize the error between the analytical approximations and the simulation models. Markovian and general-type manufacturing systems, with multiple product classes, job circulation due to failures, and fork-join stations to model parallel processing, were studied. In both the Markovian and the general case, the approximations without correction terms performed quite well for one- and two-product problem instances. However, the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to handle more than two products. Numerical comparisons showed that the approximations perform remarkably well when the correction factors are used: on average, the flow time error was reduced from 38.19% to 5.59% in the Markovian case, and from 26.39% to 7.23% in the general case.
All the equations in the analytical formulations were implemented as a set of Matlab scripts. Using this set, operations managers of web server assembly lines, or of manufacturing or other service systems with similar characteristics, can estimate various system performance measures and make judicious decisions, especially when setting delivery due dates, planning capacity, and mitigating bottlenecks.
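The correction-term idea can be illustrated on the simplest possible base approximation. The M/M/1 flow-time formula is textbook; the correction coefficients below are invented for illustration, not the dissertation's fitted regression values.

```python
# Sketch of the correction-term idea (illustrative coefficients): start from a
# textbook M/M/1 flow-time approximation, then apply a regression-style
# correction that grows with traffic intensity and number of product classes.

def mm1_flow_time(arrival_rate, service_rate):
    # Textbook M/M/1 sojourn time: W = 1 / (mu - lambda).
    assert arrival_rate < service_rate, "queue must be stable"
    return 1.0 / (service_rate - arrival_rate)

def corrected_flow_time(arrival_rate, service_rate, n_products,
                        b0=1.0, b1=0.15, b2=0.05):
    # Multiplicative correction (in the dissertation, fitted by regression
    # against simulation): grows with utilisation rho and product count.
    rho = arrival_rate / service_rate
    correction = b0 + b1 * rho + b2 * (n_products - 1)
    return mm1_flow_time(arrival_rate, service_rate) * correction

w = mm1_flow_time(0.8, 1.0)                       # ~5.0 time units
wc = corrected_flow_time(0.8, 1.0, n_products=3)  # ~5.0 * 1.22
print(round(w, 2), round(wc, 2))
```

The structure matches the finding quoted above: with one or two products the correction is nearly 1 (the raw approximation suffices), while the correction grows, and matters most, at high traffic intensity and product counts.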

Relevance:

100.00%

Publisher:

Abstract:

This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, such environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher level services that further abstract the infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, sparing them the burden of deploying and managing these resources themselves. In this work, we focus on issues related to scheduling scientific workloads on virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) An in-depth analysis of the execution characteristics of the target applications when run in virtualized environments. (2) A performance prediction methodology applicable to the target environment. (3) A scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality of service guarantees. In the process of addressing these pertinent issues for our target user base (i.e. medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%.
Our scheduling methodology, which is tested with medical image processing workloads, is compared to that of two baseline scheduling solutions and we find that it outperforms them in terms of both the number of jobs processed and resource utilization by 20–30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
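The two pieces the dissertation combines, runtime prediction and deadline-aware placement, can be sketched together. The job data, VM names, and mean-based predictor below are hypothetical simplifications of the methodology described above.

```python
# Hedged sketch (hypothetical data): predict a job's runtime from its history,
# then place it on the VM that finishes it earliest without missing a deadline.

def predict_runtime(history):
    # Simplest predictor: mean of past runtimes for this application.
    return sum(history) / len(history)

def schedule(job, vms, now=0.0):
    # vms: {name: time at which that VM becomes free}
    runtime = predict_runtime(job["history"])
    # Earliest-finish-time placement among VMs that can meet the deadline.
    feasible = {v: max(t, now) + runtime for v, t in vms.items()
                if max(t, now) + runtime <= job["deadline"]}
    if not feasible:
        return None  # would violate the QoS guarantee; reject or queue the job
    return min(feasible, key=feasible.get)

vms = {"vm1": 10.0, "vm2": 3.0}
job = {"history": [4.0, 6.0, 5.0], "deadline": 12.0}
print(schedule(job, vms))  # vm2 finishes at 8.0; vm1 would finish at 15.0
```

Rejecting infeasible placements up front is what lets a scheduler of this shape honour deadlines, consistent with the zero-violation result reported above; the quality of that guarantee rests on prediction accuracy (the dissertation reports ~15% average error).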

Relevance:

100.00%

Publisher:

Abstract:

Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining & knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access from heterogeneous, autonomous, distributed information sources. This thesis describes a heterogeneous database system being developed at the High-Performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis work include: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for semantic heterogeneity resolution by investigating the extents of the meta-data constructs of component schemas, shown to be correct, complete and unambiguous; (iii) a semi-automated technique for identifying semantic relations, the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process, acting as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of our work.

Relevance:

100.00%

Publisher:

Abstract:

Transboundary cooperation is viewed as an essential element of Marine Spatial Planning (MSP). While much of the MSP literature focuses on the need for, and benefits of, transboundary MSP, this paper explores the political and institutional factors that may facilitate an effective transition to such an approach. Drawing on transboundary planning theory and practice, key contextual factors likely to expedite the transition to transboundary MSP are reviewed. These include: policy convergence in neighbouring jurisdictions; prior experience of transboundary planning; and good working relations amongst key actors. Based on this review, an assessment of the conditions for transboundary MSP in the adjoining waters of Northern Ireland and the Republic of Ireland is undertaken. A number of recommendations are then advanced for transboundary MSP on the island of Ireland, including the need to address the role of formal transboundary institutions and the lack of an agreed legal maritime boundary. The paper concludes with some commentary on the political realities of implementing transboundary MSP.

Relevance:

100.00%

Publisher:

Abstract:

As complex radiotherapy techniques become more widely practiced, comprehensive 3D dosimetry is a growing necessity for advanced quality assurance. However, clinical implementation has been impeded by a wide variety of factors, including the expense of dedicated optical dosimeter readout tools, high operational costs, and overall difficulty of use. To address these issues, a novel dry-tank optical CT scanner was designed for PRESAGE 3D dosimeter readout, relying on 3D printed components and omitting costly parts used in preceding optical scanners. This work details the design, prototyping, and basic commissioning of the Duke Integrated-lens Optical Scanner (DIOS).

The convex scanning geometry was designed in ScanSim, an in-house Monte Carlo optical ray-tracing simulation. ScanSim parameters were used to build a 3D rendering of a convex 'solid tank' for optical-CT, which is capable of collimating a point light source into telecentric geometry without significant quantities of refractive-index-matched fluid. The model was 3D printed, processed, and converted into a negative mold via rubber casting to produce a transparent polyurethane scanning tank. The DIOS was assembled with the solid tank, a 3 W red LED light source, a computer-controlled rotation stage, and a 12-bit CCD camera. Initial optical phantom studies show negligible spatial inaccuracies in 2D projection images and 3D tomographic reconstructions. A PRESAGE 3D dose measurement for a 4-field box treatment plan from Eclipse shows 95% of voxels passing gamma analysis at 3%/3mm criteria. Gamma analysis between tomographic images of the same dosimeter in the DIOS and DLOS systems shows 93.1% agreement at 5%/1mm criteria. From this initial study, the DIOS has demonstrated promise as an economically viable optical-CT scanner. However, further improvements will be necessary to fully develop this system into an accurate and reliable tool for advanced QA.
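The gamma criteria quoted above (e.g. 3%/3mm) can be made concrete with a simplified 1D version of the test. The dose profiles below are invented toy data; real analysis of a PRESAGE dosimeter runs the same metric over a full 3D grid.

```python
# Simplified 1D illustration of gamma analysis: a measured point passes if
# some reference point keeps the combined dose-difference/distance metric <= 1.
import math

def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dist_mm=3.0):
    max_ref = max(ref)  # dose tolerance is taken relative to the maximum dose
    passed = 0
    for i, d_m in enumerate(meas):
        best = math.inf
        for j, d_r in enumerate(ref):
            dd = (d_m - d_r) / (dose_tol * max_ref)  # dose-difference term
            dr = (i - j) * spacing_mm / dist_mm      # distance-to-agreement term
            best = min(best, math.hypot(dd, dr))
        passed += best <= 1.0
    return passed / len(meas)

ref = [0.0, 0.5, 1.0, 1.0, 0.5, 0.0]      # toy reference dose profile
meas = [0.0, 0.52, 1.01, 0.99, 0.5, 0.0]  # small perturbations: all pass
print(gamma_pass_rate(ref, meas, spacing_mm=1.0))  # -> 1.0
```

A reported figure like "95% of voxels passing at 3%/3mm" is exactly this pass fraction, computed voxel-by-voxel over the reconstructed dose volume.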

Pre-clinical animal studies are used as a conventional means of translational research, as a midpoint between in-vitro cell studies and clinical implementation. However, modern small animal radiotherapy platforms are primitive in comparison with conventional linear accelerators. This work also investigates a series of 3D printed tools to expand the treatment capabilities of the X-RAD 225Cx orthovoltage irradiator, and applies them to a feasibility study of hippocampal avoidance in rodent whole-brain radiotherapy.

As an alternative to lead, a novel 3D-printable tungsten-composite ABS plastic, GMASS, was tested to create precisely shaped blocks. Film studies show that virtually all primary radiation at 225 kVp is attenuated by GMASS blocks of 0.5 cm thickness. BlockGen, a state-of-the-art software tool, was used to create custom hippocampus-shaped blocks from medical image data for any possible axial treatment field arrangement. A custom 3D printed bite block was developed to immobilize and position a supine rat for optimal hippocampal conformity. A CT of an immobilized rat with digitally inserted blocks was imported into the SmART-Plan Monte Carlo simulation software to determine the optimal beam arrangement. Protocols with 4 and 7 equally spaced fields were considered as viable treatment options, featuring improved hippocampal conformity and whole-brain coverage compared with prior lateral-opposed protocols. Custom rodent-morphic PRESAGE dosimeters were developed to accurately reflect these treatment scenarios, and a 3D dosimetry study was performed to confirm the SmART-Plan simulations. Measured doses indicate significant hippocampal sparing and moderate whole-brain coverage.