989 results for Robust operation


Relevance:

20.00%

Publisher:

Abstract:

The multiquantum barrier (MQB), proposed by Iga et al. in 1986, has been shown by several researchers to be an effective structure for improving the operating characteristics of laser diodes. These improvements include a reduction in the laser threshold current and increased characteristic temperatures. The operation of the MQB has been described as providing an increased barrier to electron overflow by reflecting high-energy electrons trying to escape from the active region of the laser. This is achieved in a manner analogous to a Bragg reflector in optics. This thesis presents an investigation of the effectiveness of the MQB as an electron reflector. Numerical models have been developed for calculating the electron reflection due to the MQB. Novel optical and electrical characterisation techniques have been used to try to measure an increase in barrier height due to the MQB in AlGaInP. It has been shown that the inclusion of MQB structures in bulk double-heterostructure visible laser diodes can halve the threshold current above room temperature, and the characteristic temperature of these lasers can be increased by up to 20 K. These improvements occur in visible laser diodes even with the inclusion of theoretically ineffective MQB structures; hence the observed improvement in the characteristics of the laser diodes described above cannot be uniquely attributed to an increased barrier height due to enhanced electron reflection. It is proposed here that the MQB improves the performance of laser diodes by preventing the diffusion of zinc into the active region of the laser. It is also proposed that the zinc trapped in the MQB region of the laser diode locally increases the p-type doping, bringing the quasi-Fermi level for holes closer to the valence band edge and thus increasing the barrier to electron overflow in the conduction band.
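
The thesis describes numerical models of electron reflection from the MQB, which acts on electrons much as a Bragg reflector acts on light. A common textbook way to set up such a calculation is a one-dimensional transfer-matrix method over a piecewise-constant conduction-band profile; the sketch below follows that generic approach with a single constant effective mass, and the layer thicknesses, barrier height and effective mass are illustrative assumptions rather than values from the thesis.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
M0   = 9.1093837015e-31  # kg
EV   = 1.602176634e-19   # J

def wavevector(E, V, m_eff):
    """Complex wavevector (1/m) in a layer of potential V (eV) at energy E (eV)."""
    return np.sqrt(2.0 * m_eff * M0 * (E - V) * EV + 0j) / HBAR

def reflection(E, layers, m_eff=0.1):
    """Electron reflection probability for a stack of (thickness_m, barrier_eV)
    layers between two semi-infinite zero-potential regions (constant effective
    mass; the BenDaniel-Duke mass discontinuity is ignored for simplicity)."""
    k0 = wavevector(E, 0.0, m_eff)
    M = np.eye(2, dtype=complex)
    k_prev = k0
    for d, V in layers:
        k = wavevector(E, V, m_eff)
        # interface matrix: continuity of psi and psi'
        D = 0.5 * np.array([[1 + k_prev / k, 1 - k_prev / k],
                            [1 - k_prev / k, 1 + k_prev / k]])
        # free propagation across the layer thickness d
        P = np.array([[np.exp(1j * k * d), 0.0],
                      [0.0, np.exp(-1j * k * d)]])
        M = P @ D @ M
        k_prev = k
    D = 0.5 * np.array([[1 + k_prev / k0, 1 - k_prev / k0],
                        [1 - k_prev / k0, 1 + k_prev / k0]])
    M = D @ M
    r = -M[1, 0] / M[1, 1]          # no wave incident from the right
    return float(abs(r) ** 2)

# Illustrative MQB-like stack: ten periods of 1.5 nm barrier / 1.5 nm well (assumed).
mqb = [(1.5e-9, 0.30), (1.5e-9, 0.0)] * 10
for E in (0.20, 0.35, 0.50):
    print(f"E = {E:.2f} eV  ->  R = {reflection(E, mqb):.3f}")
```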

Relevance:

20.00%

Publisher:

Abstract:

Since Wireless Sensor Networks (WSNs) are subject to failures, fault tolerance becomes an important requirement for many WSN applications. Fault tolerance can be enabled in different areas of WSN design and operation, including the Medium Access Control (MAC) layer and the initial topology design. To be robust to failures, a MAC protocol must be able to adapt to traffic fluctuations and topology dynamics. We design ER-MAC, which can switch from energy-efficient operation during normal monitoring to reliable and fast delivery for emergency monitoring, and vice versa. It can also prioritise high-priority packets and guarantee fair packet delivery from all sensor nodes. Topology design supports fault tolerance by ensuring that there are alternative acceptable routes to data sinks when failures occur. We provide solutions for four topology planning problems: Additional Relay Placement (ARP), Additional Backup Placement (ABP), Multiple Sink Placement (MSP), and Multiple Sink and Relay Placement (MSRP). Our solutions use a local search technique based on Greedy Randomized Adaptive Search Procedures (GRASP). GRASP-ARP deploys relays for (k,l)-sink-connectivity, where each sensor node must have k vertex-disjoint paths of length ≤ l to sinks. To count how many disjoint paths a node has, we propose Counting-Paths. GRASP-ABP deploys fewer relays than GRASP-ARP by focusing only on the most important nodes – those whose failure has the worst effect. To identify such nodes, we define Length-constrained Connectivity and Rerouting Centrality (l-CRC). Greedy-MSP and GRASP-MSP place minimal-cost sinks to ensure that each sensor node in the network is double-covered, i.e. has two length-bounded paths to two sinks. Greedy-MSRP and GRASP-MSRP deploy sinks and relays at minimal cost to make the network double-covered and non-critical, i.e. all sensor nodes must have length-bounded alternative paths to sinks when an arbitrary sensor node fails. We then evaluate the fault tolerance of each topology in data-gathering simulations using ER-MAC.
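
The topology-planning solutions are all built on GRASP, a metaheuristic that repeatedly constructs a greedy randomized solution and then improves it by local search, keeping the best result found. The sketch below is a generic GRASP skeleton for a much-simplified relay-placement model (plain coverage rather than (k,l)-sink-connectivity); the cost and coverage data, the restricted-candidate-list parameter alpha, and the redundancy-removal local search are illustrative assumptions and do not reproduce Counting-Paths or l-CRC.

```python
import random

def grasp_relay_placement(sites, sensors, covers, cost, iters=50, alpha=0.3, seed=1):
    """Generic GRASP: greedy randomized construction followed by local search,
    repeated for a number of iterations, keeping the cheapest feasible solution.

    sites   : candidate relay sites
    sensors : set of sensors that must all be covered
    covers  : dict  site -> set of sensors that the site covers
    cost    : dict  site -> deployment cost
    """
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        # --- greedy randomized construction ---
        chosen, uncovered = set(), set(sensors)
        while uncovered:
            scored = [(cost[s] / len(covers[s] & uncovered), s)
                      for s in sites
                      if s not in chosen and covers[s] & uncovered]
            if not scored:
                break                                # no site can help: infeasible
            scored.sort()
            lo, hi = scored[0][0], scored[-1][0]
            rcl = [s for v, s in scored if v <= lo + alpha * (hi - lo)]
            pick = rng.choice(rcl)                   # restricted candidate list
            chosen.add(pick)
            uncovered -= covers[pick]
        if uncovered:
            continue
        # --- local search: drop relays made redundant by later picks ---
        improved = True
        while improved:
            improved = False
            for s in sorted(chosen, key=lambda x: -cost[x]):
                rest = chosen - {s}
                covered = set().union(*(covers[r] for r in rest)) if rest else set()
                if covered >= sensors:
                    chosen, improved = rest, True
                    break
        total = sum(cost[s] for s in chosen)
        if total < best_cost:
            best, best_cost = set(chosen), total
    return best, best_cost

# Tiny illustrative instance (assumed data).
sites = ["A", "B", "C"]
sensors = {1, 2, 3, 4}
covers = {"A": {1, 2}, "B": {2, 3, 4}, "C": {1, 4}}
cost = {"A": 2.0, "B": 3.0, "C": 2.5}
print(grasp_relay_placement(sites, sensors, covers, cost))
```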

Relevance:

20.00%

Publisher:

Abstract:

Due to growing concerns regarding anthropogenic interference with the climate system, countries across the world are being challenged to develop effective strategies to mitigate climate change by reducing or preventing greenhouse gas (GHG) emissions. The European Union (EU) is committed to contributing to this challenge by setting a number of climate and energy targets for the years 2020, 2030 and 2050 and then agreeing effort sharing amongst Member States. This thesis focuses on one Member State, Ireland, which faces specific challenges and is not on track to meet the targets agreed to date. Before this work commenced, there were no projections of energy demand or supply for Ireland beyond 2020. This thesis uses techno-economic energy modelling instruments to address this knowledge gap. It builds and compares robust, comprehensive policy scenarios, providing a means of assessing the implications of different future energy and emissions pathways for the Irish economy, Ireland’s energy mix and the environment. A central focus of this thesis is to explore the dynamics of the energy system moving towards a low-carbon economy. This thesis develops an energy systems model (the Irish TIMES model) to assess the implications of a range of energy and climate policy targets and target years. The thesis also compares the results generated from the least-cost scenarios with official projections and target pathways and provides useful metrics and indicators to identify key drivers and to support both policy makers and stakeholders in identifying cost-optimal strategies. The thesis also extends the functionality of energy system modelling by developing and applying new methodologies to provide additional insights, with a focus on particular issues that emerge from the scenario analysis carried out. Firstly, the thesis develops a methodology for soft-linking an energy systems model (Irish TIMES) with a power systems model (PLEXOS) to improve the interpretation of the electricity sector results in the energy system model. The soft-linking enables higher temporal resolution and improved characterisation of power plants and power system operation. Secondly, the thesis develops a methodology for the integration of agriculture and energy systems modelling to enable coherent economy-wide climate mitigation scenario analysis. This provides a very useful starting point for considering the trade-offs between the energy system and agriculture in the context of a low-carbon economy and for enabling analysis of land-use competition. Three specific time-scale perspectives are examined in this thesis (2020, 2030, 2050), aligning with key policy target time horizons. The results indicate that Ireland’s short-term mandatory emissions reduction target will not be achieved without a significant reassessment of renewable energy policy and that the current dominant policy focus on wind-generated electricity is misplaced.
In the medium to long term, the results suggest that energy efficiency is the first cost-effective measure to deliver emissions reduction; biomass and biofuels are likely to be the most significant fuel source for Ireland in the context of a low-carbon future, prompting the need for a detailed assessment of possible implications for sustainability and competition with the agri-food sectors; significant changes are required in infrastructure to deliver deep emissions reductions (to enable the electrification of heat and transport, to accommodate carbon capture and storage (CCS) facilities and for biofuels); and competition between energy and agriculture for land use will become a key issue. The purpose of this thesis is to increase the evidence base underpinning energy and climate policy decisions in Ireland. The methodology is replicable in other Member States.

Relevance:

20.00%

Publisher:

Abstract:

The demand for optical bandwidth continues to increase year on year, driven primarily by entertainment services and video streaming to the home. Current photonic systems are coping with this demand by increasing data rates through faster modulation techniques, spectrally efficient transmission systems and an increased number of modulated optical channels per fibre strand. Such photonic systems are large and power hungry due to the high number of discrete components required in their operation. Photonic integration offers excellent potential for combining otherwise discrete system components on a single device to provide robust, power-efficient and cost-effective solutions. In particular, the design of optical modulators has been an area of immense interest in recent times. Not only has research been aimed at developing modulators with faster data rates, but there has also been a push towards making modulators as compact as possible. Mach-Zehnder modulators (MZMs) have proven to be highly successful in many optical communication applications. However, due to the relatively weak electro-optic effect on which they are based, they remain large, with typical device lengths of 4 to 7 mm, while requiring a travelling-wave structure for high-speed operation. Nested MZMs have been extensively used in the generation of advanced modulation formats, where multi-symbol transmission can be used to increase data rates at a given modulation frequency. Such nested structures have high losses and require both complex fabrication and packaging. In recent times, it has been shown that electro-absorption modulators (EAMs) can be used in a specific arrangement to generate quadrature phase shift keying (QPSK) modulation. EAM-based QPSK modulators have increased potential for integration and can be made significantly more compact than MZM-based modulators. However, such modulator designs suffer from losses in excess of 40 dB, which limits their use in practical applications. The work in this thesis has focused on how these losses can be reduced by using photonic integration. In particular, the integration of multiple lasers with the modulator structure was considered as an excellent means of reducing fibre coupling losses while maximising the optical power on chip. A significant difficulty when using multiple integrated lasers in such an arrangement is ensuring coherence between the integrated lasers. The work investigated in this thesis demonstrates for the first time how optical injection locking between discrete lasers on a single photonic integrated circuit (PIC) can be used in the generation of coherent optical signals. This was done by first considering the monolithic integration of lasers and optical couplers to form an on-chip optical power splitter, and then examining the behaviour of a mutually coupled system of integrated lasers. By operating the system in a highly asymmetric coupling regime, a stable phase-locking region was found between the integrated lasers. It was then shown that in this stable phase-locked region the optical outputs of each laser were coherent with each other and phase-locked to a common master laser.
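
The abstract contrasts nested Mach-Zehnder modulators with the more compact EAM-based QPSK arrangement. As background on why a nested (I/Q) MZM produces QPSK, the sketch below evaluates the ideal chirp-free MZM field transfer function with the two child modulators biased at their nulls and driven in quadrature; the drive voltages and Vπ are illustrative assumptions, and this is not the EAM-based scheme studied in the thesis.

```python
import numpy as np

def mzm_field(v, v_pi):
    """Ideal chirp-free push-pull MZM field transfer: E_out/E_in = cos(pi*v/(2*v_pi))."""
    return np.cos(np.pi * v / (2 * v_pi))

def iq_qpsk_symbols(bits_i, bits_q, v_pi=3.0):
    """QPSK from a nested (I/Q) MZM: each child MZM is biased at its null and driven
    with a +/- v_pi swing, which flips the optical field sign (BPSK per arm); the Q
    arm is then rotated by 90 degrees and the two arms are combined."""
    bits_i = np.asarray(bits_i)
    bits_q = np.asarray(bits_q)
    # total drive = null bias (v_pi) +/- v_pi  ->  field of -1 or +1 per arm
    v_i = v_pi + v_pi * (1 - 2 * bits_i)
    v_q = v_pi + v_pi * (1 - 2 * bits_q)
    e_i = mzm_field(v_i, v_pi)
    e_q = mzm_field(v_q, v_pi)
    return 0.5 * (e_i + 1j * e_q)       # four symbols at (+-1 +- 1j)/2

# The four QPSK constellation points (bit-to-symbol mapping chosen arbitrarily here).
print(iq_qpsk_symbols([0, 0, 1, 1], [0, 1, 0, 1]))
```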

Relevance:

20.00%

Publisher:

Abstract:

Directed self-assembly (DSA) of block copolymers (BCPs) is a prime candidate for further extending the dimensional scaling of silicon integrated circuit features for the nanoelectronics industry. Top-down optical techniques employed for photoresist patterning are predicted to reach an endpoint due to diffraction limits. Additionally, the prohibitive costs of “fabs” and high-volume manufacturing tools are issues that have led to the search for alternative, complementary patterning processes. This thesis reports the fabrication of semiconductor features from nanoscale on-chip etch masks using “high χ” BCP materials. The fabrication of silicon and germanium nanofins via metal-oxide-enhanced BCP on-chip etch masks, which may be of importance for future fin field-effect transistor (FinFET) applications, is detailed.

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: In a time-course microarray experiment, the expression level of each gene is observed across a number of time points in order to characterize the temporal trajectories of the gene-expression profiles. For many of these experiments, the scientific aim is the identification of genes for which the trajectories depend on an experimental or phenotypic factor. There is an extensive recent body of literature on statistical methodology for addressing this analytical problem. Most of the existing methods are based on estimating the time-course trajectories using parametric or non-parametric mean regression methods. The sensitivity of these regression methods to outliers, an issue that is well documented in the statistical literature, should be of concern when analyzing microarray data. RESULTS: In this paper, we propose a robust testing method for identifying genes whose expression time profiles depend on a factor. Furthermore, we propose a multiple testing procedure to adjust for multiplicity. CONCLUSIONS: Through an extensive simulation study, we illustrate the performance of our method. Finally, we report the results from applying our method to a case study and discuss potential extensions.
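
The abstract does not give the paper's test statistic or multiplicity adjustment, so the sketch below is only a generic illustration of the workflow it describes: a robust (median-based) per-gene statistic for a factor effect on the time profile, permutation p-values with labels shuffled within time points, and a Benjamini-Hochberg adjustment. The data layout and all parameter values are assumptions.

```python
import numpy as np

def gene_stat(y, group, time):
    """Robust factor-effect statistic for one gene: summed squared difference of
    the per-time-point group medians (medians resist outlying expression values)."""
    stat = 0.0
    for t in np.unique(time):
        m0 = np.median(y[(time == t) & (group == 0)])
        m1 = np.median(y[(time == t) & (group == 1)])
        stat += (m1 - m0) ** 2
    return stat

def permutation_pvalues(expr, group, time, n_perm=999, seed=0):
    """Per-gene permutation p-values for a two-level factor effect on the time
    profile; labels are permuted within each time point to preserve the design."""
    rng = np.random.default_rng(seed)
    obs = np.array([gene_stat(y, group, time) for y in expr])
    exceed = np.ones(expr.shape[0])               # +1 accounts for the observed value
    for _ in range(n_perm):
        perm = group.copy()
        for t in np.unique(time):
            idx = np.where(time == t)[0]
            perm[idx] = rng.permutation(group[idx])
        null = np.array([gene_stat(y, perm, time) for y in expr])
        exceed += (null >= obs)
    return exceed / (n_perm + 1)

def benjamini_hochberg(p):
    """Benjamini-Hochberg adjusted p-values (q-values) for FDR control."""
    p = np.asarray(p)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]   # enforce monotonicity
    q = np.empty(n)
    q[order] = np.minimum(scaled, 1.0)
    return q

# Toy layout (assumed): 50 genes, 2 groups x 4 time points x 3 replicates.
rng = np.random.default_rng(1)
time = np.tile(np.repeat(np.arange(4), 3), 2)
group = np.repeat([0, 1], 12)
expr = rng.normal(size=(50, 24))
expr[:5, group == 1] += 2.0                   # five genes with a genuine factor effect
q = benjamini_hochberg(permutation_pvalues(expr, group, time))
print("flagged genes:", np.where(q < 0.05)[0])
```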

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we propose a framework for robust optimization that relaxes the standard notion of robustness by allowing the decision maker to vary the protection level in a smooth way across the uncertainty set. We apply our approach to the problem of maximizing the expected value of a payoff function when the underlying distribution is ambiguous and therefore robustness is relevant. Our primary objective is to develop this framework and relate it to the standard notion of robustness, which deals with only a single guarantee across one uncertainty set. First, we show that our approach connects closely to the theory of convex risk measures. We show that the complexity of this approach is equivalent to that of solving a small number of standard robust problems. We then investigate the conservatism benefits and downside probability guarantees implied by this approach and compare them to those of the standard robust approach. Finally, we illustrate the methodology on an asset allocation example consisting of historical market data over a 25-year investment horizon and find in every case we explore that relaxing standard robustness with soft robustness yields a seemingly favorable risk-return trade-off: each case results in a higher out-of-sample expected return for a relatively minor degradation of out-of-sample downside performance. © 2010 INFORMS.
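
As a baseline for the "standard notion of robustness" that the paper relaxes, the sketch below solves a worst-case expected-payoff allocation over a finite ambiguity set of scenario distributions, formulated as a max-min linear program and solved with scipy; the asset returns and candidate distributions are illustrative assumptions, and the soft-robust relaxation itself is not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def robust_allocation(R, dists):
    """Standard (worst-case) robust allocation: maximize the minimum expected
    portfolio return over a finite ambiguity set of scenario distributions.

    R     : (n_scenarios, n_assets) matrix of asset returns per scenario
    dists : (n_dists, n_scenarios) candidate probability vectors
    """
    n_s, n_a = R.shape
    # variables x = [w_1..w_na, t]; maximize t  <=>  minimize -t
    c = np.zeros(n_a + 1)
    c[-1] = -1.0
    # t <= dists[k] @ R @ w  for every candidate distribution k
    A_ub = np.hstack([-(dists @ R), np.ones((dists.shape[0], 1))])
    b_ub = np.zeros(dists.shape[0])
    A_eq = np.hstack([np.ones((1, n_a)), np.zeros((1, 1))])   # weights sum to 1
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n_a + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_a], res.x[-1]

# Illustrative example: 3 assets, 4 return scenarios, 3 candidate distributions.
R = np.array([[ 0.08, 0.03, 0.01],
              [ 0.02, 0.04, 0.01],
              [-0.05, 0.01, 0.01],
              [ 0.12, 0.02, 0.01]])
dists = np.array([[0.25, 0.25, 0.25, 0.25],
                  [0.10, 0.20, 0.50, 0.20],
                  [0.40, 0.30, 0.20, 0.10]])
w, worst = robust_allocation(R, dists)
print("weights:", np.round(w, 3), " worst-case expected return:", round(worst, 4))
```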

Relevance:

20.00%

Publisher:

Abstract:

An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.

This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.

On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital print service provider (PSP), to evaluate our optimization algorithms.

In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
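
The incremental genetic algorithm and RPI's objective function are not specified in the abstract, so the sketch below is only a generic permutation-encoded genetic algorithm for sequencing orders, using an illustrative total-tardiness objective with assumed processing times and due dates; it shows the representation, crossover and selection machinery such a scheduler builds on, not the IGA itself.

```python
import random

def total_tardiness(seq, proc, due):
    """Total tardiness of an order sequence processed one after another."""
    t, tardiness = 0.0, 0.0
    for j in seq:
        t += proc[j]
        tardiness += max(0.0, t - due[j])
    return tardiness

def order_crossover(p1, p2, rng):
    """OX crossover: copy a slice from parent 1, fill the rest in parent 2's order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [j for j in p2 if j not in child[a:b]]
    for i, j in zip([i for i in range(n) if child[i] is None], fill):
        child[i] = j
    return child

def ga_schedule(proc, due, pop_size=40, gens=200, pm=0.2, seed=0):
    """Generic GA over dispatch sequences (permutation encoding)."""
    rng = random.Random(seed)
    n = len(proc)
    fit = lambda s: total_tardiness(s, proc, due)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    best = min(pop, key=fit)
    for _ in range(gens):
        new_pop = [best[:]]                        # elitism: keep the best sequence
        while len(new_pop) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fit)  # tournament selection
            p2 = min(rng.sample(pop, 3), key=fit)
            child = order_crossover(p1, p2, rng)
            if rng.random() < pm:                  # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=fit)
    return best, fit(best)

proc = [4, 2, 7, 3, 5, 1]          # illustrative processing times
due  = [6, 4, 20, 9, 16, 3]        # illustrative due dates
print(ga_schedule(proc, due))
```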

We next discuss the analysis and prediction of different attributes involved in the hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution-time and process-status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also provide a probabilistic estimation of the predicted status. An order generally consists of multiple serial and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce the enterprise late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis, and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
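
The service-level analysis forecasts each decomposed component separately and aggregates the component predictions. The sketch below illustrates that aggregate-of-component-forecasts idea with a simple additive moving-average decomposition and naive per-component forecasts on assumed data; it does not reproduce the thesis's multivariate models for correlated components.

```python
import numpy as np

def decompose(y, period):
    """Simple additive decomposition: centered moving-average trend,
    per-period-mean seasonal component, residual remainder."""
    y = np.asarray(y, dtype=float)
    trend = np.convolve(y, np.ones(period) / period, mode="same")
    detrended = y - trend
    pattern = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(pattern, len(y) // period + 1)[:len(y)]
    resid = y - trend - seasonal
    return trend, seasonal, resid

def forecast(y, period, horizon):
    """Forecast each component separately and aggregate the predictions."""
    trend, seasonal, resid = decompose(y, period)
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, trend, 1)          # linear trend extrapolation
    t_future = np.arange(len(y), len(y) + horizon)
    trend_fc = slope * t_future + intercept
    seasonal_fc = np.array([seasonal[i % period] for i in t_future])
    resid_fc = np.full(horizon, resid.mean())           # residual ~ its mean
    return trend_fc + seasonal_fc + resid_fc

# Illustrative daily series with a weekly cycle and an upward trend (assumed data).
rng = np.random.default_rng(0)
days = np.arange(8 * 7)
y = 100 + 0.5 * days + 10 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 2, days.size)
print(np.round(forecast(y, period=7, horizon=7), 1))
```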

In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. These tools are expected to provide solutions for enterprises to increase reconfigurability, accomplish more automated procedures, and obtain data-driven recommendations for effective decisions.

Relevance:

20.00%

Publisher:

Abstract:

post-deadline paper

Relevance:

20.00%

Publisher:

Abstract:

Mathematical models of straight-grate pellet induration processes have been developed and carefully validated by a number of workers over the past two decades. However, the subsequent exploitation of these models in process optimization is less clear, but obviously requires a sound understanding of how the key factors control the operation. In this article, we show how a thermokinetic model of pellet induration, validated against operating data from one of the Iron Ore Company of Canada (IOCC) lines in Canada, can be exploited in process optimization from the perspectives of fuel efficiency, production rate, and product quality. Most existing processes are restricted in the options available for process optimization. Here, we review the role of each of the drying (D), preheating (PH), firing (F), after-firing (AF), and cooling (C) phases of the induration process. We then use the induration process model to evaluate whether the first drying zone is best placed on the up-draft or down-draft gas-flow stream, and we optimize the on-gas temperature profile in the hood of the PH, F, and AF zones to reduce the burner fuel by at least 10 pct over the long term. Finally, we consider how efficient and flexible the process could be if some of the structural constraints were removed (i.e., addressed at the design stage). The analysis suggests it should be possible to reduce the burner fuel load by 35 pct, easily increase production by 5+ pct, and improve pellet quality.

Relevance:

20.00%

Publisher:

Abstract:

Existing election algorithms suffer from limited scalability. This limit stems from their communication design, which in turn stems from their fundamentally two-state behaviour. This paper presents a new election algorithm specifically designed to be highly scalable in broadcast networks whilst allowing any processing node to become coordinator with initially equal probability. To achieve this, careful attention has been paid to the communication design, and an additional state has been introduced. The design of the tri-state election algorithm has been motivated by the requirements analysis of a major research project to deliver robust, scalable distributed applications, including load sharing, in hostile computing environments in which it is common for processing nodes to be rebooted frequently without notice. The new election algorithm is based in part on a simple 'emergent' design. The science of emergence is of great relevance to developers of distributed applications because it describes how higher-level self-regulatory behaviour can arise from many participants following a small set of simple rules. The tri-state election algorithm is shown to have very low communication complexity, in which the number of messages generated remains loosely bounded regardless of scale for large systems; to be highly scalable, because nodes in the idle state do not transmit any messages; and, because of its self-organising characteristics, to be very stable.
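
The abstract names the three states but not the algorithm's rules, so the following discrete-time simulation is only a hypothetical illustration of a three-state (idle / candidate / coordinator) broadcast election in which idle nodes never transmit and candidates claim coordinatorship after a randomized backoff; every rule, constant and tie-break here is an assumption and does not reproduce the published tri-state design.

```python
import random

IDLE, CANDIDATE, COORDINATOR = "idle", "candidate", "coordinator"

class Node:
    def __init__(self, node_id, rng):
        self.id = node_id
        self.rng = rng
        self.state = IDLE
        self.silence = 0        # steps since any election traffic was heard
        self.backoff = 0

    def step(self, inbox, outbox):
        heartbeats = [sender for kind, sender in inbox if kind == "heartbeat"]
        claims = [sender for kind, sender in inbox if kind == "claim"]
        self.silence = 0 if (heartbeats or claims) else self.silence + 1

        if self.state == COORDINATOR:
            # tie-break: a clash between simultaneous claimants is won by the lowest id
            if any(sender < self.id for sender in heartbeats + claims):
                self.state = IDLE
            else:
                outbox.append(("heartbeat", self.id))
        elif self.state == CANDIDATE:
            if heartbeats or claims:
                self.state = IDLE                   # someone else got there first
            else:
                self.backoff -= 1
                if self.backoff <= 0:
                    self.state = COORDINATOR
                    outbox.append(("claim", self.id))
        else:                                       # IDLE nodes never transmit
            if self.silence > 3:
                self.state = CANDIDATE
                self.backoff = self.rng.randint(1, 5)   # randomized backoff

rng = random.Random(42)
nodes = [Node(i, rng) for i in range(8)]
broadcast = []                                      # messages delivered next step
for _ in range(20):
    inbox, broadcast = broadcast, []
    for node in nodes:
        node.step(inbox, broadcast)
print({node.id: node.state for node in nodes})
```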

Relevance:

20.00%

Publisher:

Abstract:

Natural distributed systems are adaptive, scalable and fault-tolerant. Emergence science describes how higher-level self-regulatory behaviour arises in natural systems from many participants following simple rule sets. Emergence advocates simple communication models, autonomy and independence, enhancing robustness and self-stabilization. High-quality distributed applications such as autonomic systems must satisfy the appropriate non-functional requirements, which include scalability, efficiency, robustness, low latency and stability. However, the traditional design of distributed applications, especially in terms of the communication strategies employed, can introduce compromises between these characteristics. This paper discusses ways in which emergence science can be applied to distributed computing, avoiding some of the compromises associated with traditionally designed applications. To demonstrate the effectiveness of this paradigm, an emergent election algorithm is described and its performance evaluated. The design incorporates nondeterministic behaviour. The resulting algorithm has very low communication complexity and is simultaneously very stable, scalable and robust.

Relevance:

20.00%

Publisher:

Abstract:

This paper considers a variant of the classical problem of minimizing makespan in a two-machine flow shop. In this variant, each job has three operations, where the first operation must be performed on the first machine, the second operation can be performed on either machine but cannot be preempted, and the third operation must be performed on the second machine. The NP-hard nature of the problem motivates the design and analysis of approximation algorithms. It is shown that a schedule in which the operations are sequenced arbitrarily, but without inserted machine idle time, has a worst-case performance ratio of 2. Also, an algorithm that constructs four schedules and selects the best is shown to have a worst-case performance ratio of 3/2. A polynomial time approximation scheme (PTAS) is also presented.
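
Once every flexible middle operation has been assigned to a machine, the variant reduces to a classical two-machine flow shop, for which Johnson's rule gives a minimum-makespan permutation. The sketch below evaluates two naive assignments (all middle operations to machine 1, then all to machine 2) and keeps the better schedule; this is only an illustration of that reduction on assumed data, not the paper's best-of-four 3/2-approximation or its PTAS.

```python
def johnson_sequence(a, b):
    """Johnson's rule for a two-machine flow shop: jobs with a_j <= b_j first in
    increasing a_j, then the remaining jobs in decreasing b_j."""
    jobs = range(len(a))
    first = sorted((j for j in jobs if a[j] <= b[j]), key=lambda j: a[j])
    last = sorted((j for j in jobs if a[j] > b[j]), key=lambda j: -b[j])
    return first + last

def makespan(seq, a, b):
    """Makespan of a permutation schedule on two machines in series."""
    c1 = c2 = 0
    for j in seq:
        c1 += a[j]
        c2 = max(c2, c1) + b[j]
    return c2

def best_of_two_assignments(p1, p2, p3):
    """Assign every flexible middle operation to machine 1, then to machine 2,
    sequence each case with Johnson's rule, and keep the better schedule."""
    best = None
    for to_machine_1 in (True, False):
        a = [p1[j] + (p2[j] if to_machine_1 else 0) for j in range(len(p1))]
        b = [p3[j] + (0 if to_machine_1 else p2[j]) for j in range(len(p1))]
        seq = johnson_sequence(a, b)
        cand = (makespan(seq, a, b), seq, "middle ops on M1" if to_machine_1 else "middle ops on M2")
        if best is None or cand[0] < best[0]:
            best = cand
    return best

# Illustrative instance: (first-op, flexible middle-op, third-op) times per job.
p1 = [3, 2, 5, 1]
p2 = [2, 4, 1, 3]
p3 = [4, 1, 3, 2]
print(best_of_two_assignments(p1, p2, p3))
```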