543 results for Penalty kicks


Relevance: 10.00%

Abstract:

We investigate numerically the effect of ultralong Raman laser fiber amplifier design parameters, such as span length, pumping distribution and grating reflectivity, on the RIN transfer from the pump to the transmitted signal. Comparison is provided with the performance of traditional second-order Raman amplification schemes, showing a relative performance penalty for ultralong laser systems that shrinks as span length increases. We show that careful choice of system parameters can partially offset this penalty. © 2010 Optical Society of America.

Relevance: 10.00%

Abstract:

The thesis presents a detailed study of different Raman fibre laser (RFL) based amplification techniques and their applications in long-haul/unrepeatered coherent transmission systems. RFL based amplification techniques were characterised from different aspects, including signal/noise power distributions, relative intensity noise (RIN) and the mode structure of the induced Raman fibre lasers. It was found for the first time that RFL based amplification techniques can be divided into three categories according to the fibre laser regime: a Fabry-Perot fibre laser with two FBGs, a weak Fabry-Perot fibre laser with one FBG and very low reflection near the input, and a random distributed feedback (DFB) fibre laser with one FBG. It was also found that lowering the reflection near the input mitigates the RIN of the signal significantly, thanks to the reduced efficiency of the Stokes shift from the forward-propagating pump. To evaluate transmission performance, the different RFL based amplifiers were assessed and optimised in long-haul coherent transmission systems. The results showed that the Fabry-Perot fibre laser based amplifier with two FBGs gave a >4.15 dB Q-factor penalty with symmetrical bidirectional pumping, as the RIN of the signal was increased significantly. In contrast, the random distributed feedback fibre laser based amplifier with one FBG mitigated the RIN of the signal, which enabled the use of bidirectional second-order pumping and consequently gave the best transmission performance, up to 7915 km. Furthermore, the random DFB fibre laser based amplifier proved effective in combating nonlinear impairments, and the maximum reach was enhanced by >28% in mid-link single/dual-band optical phase conjugator (OPC) transmission systems. In addition, unrepeatered transmission over >350 km of fibre using RFL based amplification was demonstrated experimentally with DP-QPSK and DP-16QAM transmitters.
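
Since the transmission results above are quoted as Q-factor penalties, a minimal sketch may help fix the convention: it converts a measured BER to a Q factor in dB and takes the difference between two configurations. The BER values and the scipy-based helper below are illustrative assumptions, not numbers or code from the thesis.

```python
# Minimal sketch (not from the thesis): converting a measured BER to a Q factor (dB)
# and computing a Q-factor penalty between two amplifier configurations.
import math
from scipy.special import erfcinv  # inverse complementary error function

def q_factor_db(ber: float) -> float:
    """Linear Q from BER via BER = 0.5*erfc(Q/sqrt(2)), then converted to dB."""
    q_lin = math.sqrt(2.0) * erfcinv(2.0 * ber)
    return 20.0 * math.log10(q_lin)

# Hypothetical BER values for two schemes (illustrative numbers only).
q_ref = q_factor_db(1e-5)     # e.g. random-DFB-laser-based amplifier
q_fp = q_factor_db(1e-3)      # e.g. Fabry-Perot fibre laser with two FBGs
print(f"Q penalty = {q_ref - q_fp:.2f} dB")
```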

Relevance: 10.00%

Abstract:

The “Nash program” initiated by Nash (Econometrica 21:128–140, 1953) is a research agenda aiming at representing every axiomatically determined cooperative solution to a game as a Nash outcome of a reasonable noncooperative bargaining game. The L-Nash solution, first defined by Forgó (Interactive Decisions. Lecture Notes in Economics and Mathematical Systems, vol 229. Springer, Berlin, pp 1–15, 1983), is obtained as the limiting point of the Nash bargaining solution when the disagreement point goes to negative infinity in a fixed direction. In Forgó and Szidarovszky (Eur J Oper Res 147:108–116, 2003), the L-Nash solution was related to the solution of multicriteria decision making, and two different axiomatizations of the L-Nash solution were also given in this context. In this paper, finite bounds are established for the penalty of disagreement in certain special two-person bargaining problems, making it possible to apply all the implementation models designed for Nash bargaining problems with a finite disagreement point to obtain the L-Nash solution as well. For another set of problems where this method does not work, a version of Rubinstein’s alternating offers game (Econometrica 50:97–109, 1982) is shown to asymptotically implement the L-Nash solution. If the penalty is internalized as a decision variable of one of the players, then a modification of Howard’s game (J Econ Theory 56:142–159, 1992) also implements the L-Nash solution.
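
To make the limiting construction concrete, here is a small numerical sketch of the Nash bargaining solution being pushed towards its L-Nash limit; the unit-disk feasible set, the direction (1, 2) and the nash_point helper are my own illustrative assumptions, not the paper's model.

```python
# Minimal numerical sketch (illustrative, not from the paper): the Nash bargaining
# solution maximises (x1 - d1)(x2 - d2) over the feasible set S; the L-Nash solution
# is its limit as the disagreement point d -> -infinity along a fixed direction.
# Here S is an assumed unit-disk utility set and the direction is (1, 2).
import numpy as np
from scipy.optimize import minimize

def nash_point(d):
    nash_product = lambda x: -(x[0] - d[0]) * (x[1] - d[1])          # maximise the Nash product
    in_S = {"type": "ineq", "fun": lambda x: 1.0 - x[0] ** 2 - x[1] ** 2}
    return minimize(nash_product, x0=[0.5, 0.5], constraints=[in_S]).x

for t in (1.0, 10.0, 100.0, 1000.0):
    d = -t * np.array([1.0, 2.0])      # disagreement point pushed out along (1, 2)
    print(t, nash_point(d))            # approaches (2/sqrt(5), 1/sqrt(5)) for this feasible set
```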

Relevance: 10.00%

Abstract:

The aim of this article is to examine in greater depth the effect that the large increase in the number of recent higher-education graduates has had on their labour market position, in this case on their earnings. Is the group of university graduates homogeneous, or can subgroups be distinguished in which graduates earn less than their better-positioned peers? To answer this question, the author used data on students who graduated from the University of Debrecen in 2007 and 2009, obtained through the Graduate Career Tracking System (Diplomás Pályakövető Rendszer). One consequence of the expansion of higher education may be that a graduate cannot find a job matching his or her qualification and is therefore forced to fill a position whose educational requirement is lower than his or her own. Workers who are overeducated in this way earn less than their peers with similar qualifications who work in jobs that match their education. According to the sample examined, this wage penalty was around 12-17% for University of Debrecen graduates, which is in line with international results.
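
As a rough illustration of how such a wage penalty is usually read off a log-wage regression with an overeducation dummy, here is a hedged sketch on synthetic data; the variable names and coefficients are hypothetical and not drawn from the Graduate Career Tracking System.

```python
# Minimal sketch (synthetic data, not the survey dataset): estimating an overeducation
# wage penalty from a log-wage OLS regression with an over-education dummy, which is
# the usual way a 12-17% penalty is read off such models.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
overeducated = rng.integers(0, 2, n)               # 1 = job requires less than a degree
experience = rng.uniform(0, 5, n)                  # years since graduation (assumed control)
log_wage = 5.0 + 0.04 * experience - 0.15 * overeducated + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), experience, overeducated])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)   # OLS coefficients
penalty = 1.0 - np.exp(beta[2])                       # dummy coefficient -> % wage penalty
print(f"estimated wage penalty: {100 * penalty:.1f}%")
```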

Relevance: 10.00%

Abstract:

Death qualification is a part of voir dire that is unique to capital trials. Unlike in all other litigation, capital jurors must affirm their willingness to impose both legal standards (either life in prison or the death penalty). Jurors who assert they are able to do so are deemed “death-qualified” and are eligible for capital jury service; jurors who assert that they are unable to do so are deemed “excludable” or “scrupled” and are barred from hearing a death penalty case. During the penalty phase of capital trials, death-qualified jurors weigh the aggravators (i.e., arguments for death) against the mitigators (i.e., arguments for life) in order to determine the sentence. If the aggravating circumstances outweigh the mitigating circumstances, then the jury is to recommend death; if the mitigating circumstances outweigh the aggravating circumstances, then the jury is to recommend life. The jury is free to weigh each aggravating and mitigating circumstance in any manner it sees fit. Previous research has found that death qualification affects jurors' receptiveness to aggravating and mitigating circumstances (e.g., Luginbuhl & Middendorf, 1988). However, these studies utilized the now-defunct Witherspoon rule and did not include a case scenario for participants to reference. The purpose of this study was to investigate whether death qualification affects jurors' endorsements of aggravating and mitigating circumstances when Witt, rather than Witherspoon, is the legal standard for death qualification. Four hundred and fifty venirepersons from the 11th Judicial Circuit in Miami, Florida completed a booklet of stimulus materials that contained the following: two death qualification questions; a case scenario that included a summary of the guilt and penalty phases of a capital case; a 26-item measure that required participants to endorse aggravators, nonstatutory mitigators, and statutory mitigators on a 6-point Likert scale; and standard demographic questions. Results indicated that death-qualified venirepersons, when compared to excludables, were more likely to endorse aggravating circumstances. Excludable participants, when compared to death-qualified venirepersons, were more likely to endorse nonstatutory mitigators. There was no significant difference between death-qualified and excludable venirepersons with respect to their endorsement of six of the seven statutory mitigators. It would appear that the Furman v. Georgia (1972) decision to declare the death penalty unconstitutional is frustrated by the Lockhart v. McCree (1986) affirmation of death qualification.

Relevance: 10.00%

Abstract:

Polynomial phase modulated (PPM) signals have been shown to provide improved error rate performance with respect to conventional modulation formats under additive white Gaussian noise and fading channels in single-input single-output (SISO) communication systems. In this dissertation, systems with two and four transmit antennas using PPM signals were presented. In both cases we employed full-rate space-time block codes in order to take advantage of the multipath channel. For two transmit antennas, we used the orthogonal space-time block code (OSTBC) proposed by Alamouti and performed symbol-wise decoding by estimating the phase coefficients of the PPM signal using three different methods: maximum-likelihood (ML), sub-optimal ML (S-ML) and the high-order ambiguity function (HAF). In the case of four transmit antennas, we used the full-rate quasi-OSTBC (QOSTBC) proposed by Jafarkhani. However, in order to ensure the best error rate performance, PPM signals were selected so as to maximize the QOSTBC’s minimum coding gain distance (CGD). Since this method does not always provide a unique solution, an additional criterion known as the maximum channel interference coefficient (CIC) was proposed. Through Monte Carlo simulations it was shown that by using QOSTBCs along with properly selected PPM constellations based on the CGD and CIC criteria, full diversity in flat fading channels, and thus low BER at high signal-to-noise ratios (SNR), can be ensured. Lastly, the performance of symbol-wise decoding for QOSTBCs was evaluated. In this case a quasi-zero-forcing method was used to decouple the received signal, and it was shown that although this technique reduces the decoding complexity of the system, there is a penalty to be paid in terms of error rate performance at high SNRs.
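
The two-antenna case rests on Alamouti's orthogonal code, whose linear combining step is what makes symbol-wise decoding possible. The NumPy sketch below shows that generic encoding/combining step under assumed flat-fading gains; it is not the dissertation's PPM-specific receiver or its ML/S-ML/HAF phase estimators.

```python
# Minimal sketch (generic Alamouti example, not the dissertation's PPM receiver):
# two-antenna orthogonal STBC encoding and the linear combining that decouples the
# two transmitted symbols, enabling symbol-wise decoding.
import numpy as np

rng = np.random.default_rng(1)
s = np.exp(1j * rng.uniform(0, 2 * np.pi, 2))                      # two unit-energy symbols s1, s2
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)    # flat-fading gains h1, h2
noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# Alamouti code matrix: slot 1 sends (s1, s2), slot 2 sends (-conj(s2), conj(s1)).
r1 = h[0] * s[0] + h[1] * s[1] + noise[0]
r2 = -h[0] * np.conj(s[1]) + h[1] * np.conj(s[0]) + noise[1]

# Linear combining decouples the symbols (up to the channel gain |h1|^2 + |h2|^2).
gain = np.sum(np.abs(h) ** 2)
s1_hat = (np.conj(h[0]) * r1 + h[1] * np.conj(r2)) / gain
s2_hat = (np.conj(h[1]) * r1 - h[0] * np.conj(r2)) / gain
print(np.round(s, 3), np.round(np.array([s1_hat, s2_hat]), 3))
```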

Relevance: 10.00%

Abstract:

The increasing emphasis on mass customization, shortened product lifecycles, and synchronized supply chains, coupled with advances in information systems, is driving most firms towards make-to-order (MTO) operations. Increasing global competition, lower profit margins, and higher customer expectations force MTO firms to plan their capacity by managing the effective demand. The goal of this research was to maximize the operational profits of a make-to-order operation by selectively accepting incoming customer orders and simultaneously allocating capacity for them at the sales stage. For integrating the two decisions, a Mixed-Integer Linear Program (MILP) was formulated which can aid an operations manager in an MTO environment in selecting a set of potential customer orders such that all the selected orders are fulfilled by their deadlines. The proposed model combines the order acceptance/rejection decision with detailed scheduling. Experiments with the formulation indicate that for larger problem sizes, the computational time required to determine an optimal solution is prohibitive. The formulation has a block diagonal structure and can be decomposed into one or more sub-problems (i.e., one sub-problem for each customer order) and a master problem by applying Dantzig-Wolfe’s decomposition principles. To solve the original MILP efficiently, an exact Branch-and-Price algorithm was developed. Various approximation algorithms were developed to further improve the runtime. Experiments conducted unequivocally show the efficiency of these algorithms compared to a commercial optimization solver. The existing literature addresses the static order acceptance problem for a single-machine environment with regular capacity, with an objective to maximize profits subject to a penalty for tardiness. This dissertation solves the order acceptance and capacity planning problem for a job shop environment with multiple resources, considering both regular and overtime resources. The Branch-and-Price algorithms developed in this dissertation are faster and can be incorporated in a decision support system which can be used on a daily basis to help make intelligent decisions in an MTO operation.
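
A stripped-down version of the order acceptance idea can be written as a small MILP. The PuLP sketch below assumes a single machine, earliest-deadline sequencing and hypothetical orders, so it only hints at the richer job-shop model and Branch-and-Price scheme described above.

```python
# Minimal sketch (my own simplification, not the dissertation's job-shop MILP):
# single-machine order acceptance. Accept a subset of orders so that, processed in
# earliest-deadline order, every accepted order finishes by its deadline, maximising profit.
import pulp

orders = {                        # hypothetical orders: (profit, processing_time, deadline)
    "A": (10, 3, 4), "B": (14, 4, 6), "C": (6, 2, 6), "D": (9, 5, 12),
}
prob = pulp.LpProblem("order_acceptance", pulp.LpMaximize)
x = {k: pulp.LpVariable(f"accept_{k}", cat="Binary") for k in orders}

prob += pulp.lpSum(orders[k][0] * x[k] for k in orders)          # total profit of accepted orders

# Feasibility: total accepted work due by each deadline must fit before that deadline.
for j, (_, _, dj) in orders.items():
    prob += pulp.lpSum(t * x[i] for i, (_, t, di) in orders.items() if di <= dj) <= dj

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({k: int(x[k].value()) for k in orders}, "profit =", pulp.value(prob.objective))
```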

Relevance: 10.00%

Abstract:

We present our approach to real-time service-oriented scheduling problems with the objective of maximizing the total system utility. Different from traditional utility accrual scheduling problems, in which each task is associated with only a single time utility function (TUF), we associate two different TUFs—a profit TUF and a penalty TUF—with each task, to model real-time services that not only need to reward early completions but also need to penalize abortions or deadline misses. The scheduling heuristics proposed in this paper judiciously accept, schedule, and abort real-time services when necessary to maximize the accrued utility. Our extensive experimental results show that the proposed algorithms can significantly outperform traditional scheduling algorithms such as Earliest Deadline First (EDF), traditional utility accrual (UA) scheduling algorithms, and an earlier scheduling approach based on a similar model.
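
To illustrate the dual-TUF model, here is a minimal sketch in which each service carries a profit TUF and a penalty TUF and a request is admitted only if its expected accrued utility is positive; the linear profit shape, constant penalty and admit test are illustrative assumptions, not the heuristics proposed in the paper.

```python
# Minimal sketch (illustrative, not the paper's algorithm): each service carries a
# profit TUF (reward for finishing at time t) and a penalty TUF (cost if aborted or
# past its deadline); a simple admission test accepts a request only if its expected
# accrued utility is positive given the estimated completion time.
from dataclasses import dataclass

@dataclass
class Service:
    exec_time: float
    deadline: float
    max_profit: float
    abort_penalty: float

    def profit_tuf(self, t: float) -> float:
        # Assumed shape: profit decays linearly from max_profit at t=0 to 0 at the deadline.
        return max(0.0, self.max_profit * (1.0 - t / self.deadline))

    def penalty_tuf(self) -> float:
        return -self.abort_penalty            # constant penalty on abort or deadline miss

def admit(service: Service, now: float, success_prob: float) -> bool:
    finish = now + service.exec_time
    expected_utility = (success_prob * service.profit_tuf(finish)
                        + (1.0 - success_prob) * service.penalty_tuf())
    return finish <= service.deadline and expected_utility > 0.0

svc = Service(exec_time=2.0, deadline=10.0, max_profit=8.0, abort_penalty=5.0)
print(admit(svc, now=3.0, success_prob=0.9))   # True: expected accrued utility is positive
```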

Relevance: 10.00%

Abstract:

This work presents a new model for the Heterogeneous p-median Problem (HPM), proposed to recover the hidden category structures present in the data provided by a sorting task procedure, a popular approach to understanding heterogeneous individuals’ perceptions of products and brands. The new model is named the Penalty-free Heterogeneous p-median Problem (PFHPM), a single-objective version of the original HPM. It eliminates the main parameter of the HPM, the penalty factor, which weights the terms of the objective function; adjusting this parameter controls the way the model recovers the hidden category structures in the data and requires broad knowledge of the problem. Additionally, two complementary formulations for the PFHPM are presented, both mixed-integer linear programming problems, from which lower bounds for the PFHPM were obtained. These values were used to validate a specialized Variable Neighborhood Search (VNS) algorithm proposed to solve the PFHPM. This algorithm provided good-quality solutions for the PFHPM, solving artificially generated instances from a Monte Carlo simulation as well as real data instances, even with limited computational resources. Statistical analyses presented in this work suggest that the new model and algorithm, the PFHPM, can recover the original category structures related to heterogeneous individuals’ perceptions more accurately than the original model and algorithm, the HPM. Finally, an illustrative application of the PFHPM is presented, as well as some insights into new possibilities for it, extending the new model to fuzzy environments.
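
For reference, the classical p-median MILP that the HPM and PFHPM build on can be stated in a few lines. The PuLP sketch below uses a hypothetical dissimilarity matrix and is not the PFHPM formulation itself.

```python
# Minimal sketch (the classical p-median MILP, not the PFHPM formulation): pick p
# medians and assign every object to one of them so that total dissimilarity is
# minimised, which is the building block the HPM/PFHPM extend to heterogeneous data.
import pulp

d = [[0, 2, 7, 9],        # hypothetical dissimilarity matrix between 4 objects
     [2, 0, 6, 8],
     [7, 6, 0, 3],
     [9, 8, 3, 0]]
n, p = len(d), 2

prob = pulp.LpProblem("p_median", pulp.LpMinimize)
y = [pulp.LpVariable(f"y{j}", cat="Binary") for j in range(n)]            # j is chosen as a median
x = [[pulp.LpVariable(f"x{i}_{j}", cat="Binary") for j in range(n)] for i in range(n)]

prob += pulp.lpSum(d[i][j] * x[i][j] for i in range(n) for j in range(n))
for i in range(n):
    prob += pulp.lpSum(x[i][j] for j in range(n)) == 1        # each object assigned exactly once
    for j in range(n):
        prob += x[i][j] <= y[j]                               # only to an open median
prob += pulp.lpSum(y) == p                                    # exactly p medians

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([j for j in range(n) if y[j].value() > 0.5])            # indices of the chosen medians
```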

Relevance: 10.00%

Abstract:

Automation of managed pressure drilling (MPD) enhances safety and increases the efficiency of drilling, and that drives the development of controllers and observers for MPD. The objective is to maintain the bottom hole pressure (BHP) within the pressure window formed by the reservoir pressure and the fracture pressure, and also to reject kicks. Practical MPD automation solutions must address the nonlinearities and uncertainties caused by variations in mud flow rate, choke opening, friction factor, mud density, etc. It is also desired that, if pressure constraints are violated, the controller take appropriate actions to reject the ensuing kick. These objectives are addressed by developing two controllers: a gain switching robust controller and a nonlinear model predictive controller (NMPC). The robust gain switching controller is designed using the H∞ loop shaping technique and was implemented using high gain bumpless transfer and a 2D lookup table. Six candidate controllers were designed in such a way that they preserve robustness and performance for different choke openings and flow rates. It is demonstrated that uniform performance is maintained under different operating conditions and that the controllers are able to reject kicks using pressure control and maintain the BHP during drill pipe extension. The NMPC was designed to regulate the BHP and contain the outlet flow rate within a tunable threshold. The important feature of that controller is that it can reject kicks without requiring any switching, so there is no scope for chattering due to switching between pressure and flow control. That is achieved by exploiting the constraint handling capability of NMPC. An active set method was used for computing the control inputs. It is demonstrated that the NMPC is able to contain kicks and maintain the BHP during drill pipe extension.
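
As a sketch of the gain switching mechanism, the snippet below shows a 2D lookup that selects one of six pre-designed candidate controllers from the current choke opening and flow rate; the grid breakpoints, nearest-neighbour rule and controller indices are assumptions for illustration, not the thesis' H∞ designs.

```python
# Minimal sketch (hypothetical grid and indices, not the thesis' controller designs):
# a 2D lookup table that picks one of six pre-designed candidate controllers from the
# current choke opening and mud flow rate, the selection step behind gain switching.
import numpy as np

choke_grid = np.array([10.0, 40.0, 70.0])        # choke opening, % (assumed breakpoints)
flow_grid = np.array([1000.0, 2000.0])           # mud flow rate, L/min (assumed breakpoints)
controller_idx = np.array([[0, 1],               # index of the candidate controller to use
                           [2, 3],
                           [4, 5]])              # 3 x 2 grid -> six candidate controllers

def select_controller(choke: float, flow: float) -> int:
    i = int(np.argmin(np.abs(choke_grid - choke)))   # nearest-neighbour lookup in each axis
    j = int(np.argmin(np.abs(flow_grid - flow)))
    return int(controller_idx[i, j])

print(select_controller(choke=55.0, flow=1800.0))    # -> candidate controller 3
```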

Relevance: 10.00%

Abstract:

Underwater georeferenced photo-transect surveys were conducted on December 10-15, 2011 at various sections of the reef at Lizard Island, Great Barrier Reef. For this survey a snorkeler or diver swam over the bottom while taking photos of the benthos at a set height using a standard digital camera and towing a GPS in a surface float, which logged the track every five seconds. A standard digital compact camera was placed in an underwater housing and fitted with a 16 mm lens, which provided a 1.0 m x 1.0 m footprint at 0.5 m height above the benthos. Horizontal distance between photos was estimated by three fin kicks of the survey diver/snorkeler, which corresponded to a surface distance of approximately 2.0 - 4.0 m. The GPS was placed in a dry bag and logged the position as it floated at the surface while being towed by the photographer. A total of 5,735 benthic photos were taken. The floating GPS setup connected to the swimmer/diver by a line enabled recording of the coordinates of each benthic photo (Roelfsema 2009). Approximate coordinates of each benthic photo were derived from the photo timestamp and the GPS coordinate timestamp using GPS Photo Link Software (www.geospatialexperts.com): coordinates of each photo were interpolated from the GPS positions logged at set times before and after the photo was captured. Benthic or substrate cover data were derived from each photo by randomly placing 24 points over each image using the Coral Point Count for Microsoft Excel (CPCe) program (Kohler and Gill, 2006). Each point was then assigned to 1 of 78 cover types, which represented the benthic feature beneath it. A benthic cover composition summary for each photo was generated automatically by the CPCe program. The resulting benthic cover data for each photo were linked to the GPS coordinates, saved as an ArcMap point shapefile, and projected to Universal Transverse Mercator WGS84 Zone 55 South.
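
The georeferencing step amounts to interpolating the five-second GPS track to each photo's timestamp. The NumPy sketch below illustrates that interpolation with hypothetical track fixes and photo times, standing in for what GPS Photo Link Software does.

```python
# Minimal sketch of the georeferencing step (my own illustration of the interpolation
# that GPS Photo Link performs): estimate each photo's coordinates from the 5-second
# GPS track and the photo timestamps. Times and positions are hypothetical.
import numpy as np

track_t = np.array([0.0, 5.0, 10.0, 15.0])             # GPS log times, s
track_lat = np.array([-14.6870, -14.6871, -14.6873, -14.6876])
track_lon = np.array([145.4460, 145.4462, 145.4463, 145.4465])

photo_t = np.array([3.0, 8.0, 12.5])                    # photo capture times, s
photo_lat = np.interp(photo_t, track_t, track_lat)      # linear interpolation between fixes
photo_lon = np.interp(photo_t, track_t, track_lon)

for t, lat, lon in zip(photo_t, photo_lat, photo_lon):
    print(f"photo at t={t:4.1f}s -> ({lat:.5f}, {lon:.5f})")
```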

Relevance: 10.00%

Abstract:

Underwater georeferenced photo-transect surveys were conducted on October 3-7, 2012 at various sections of the reef and lagoon at Lizard Island, Great Barrier Reef. For this survey a snorkeler swam while taking photos of the benthos at a set distance from the benthos using a standard digital camera and towing a GPS in a surface float, which logged the track every five seconds. A Canon G12 digital camera was placed in a Canon underwater housing and photos were taken at 1 m height above the benthos. Horizontal distance between photos was estimated by three fin kicks of the survey snorkeler, which corresponded to a surface distance of approximately 2.0 - 4.0 m. The GPS was placed in a dry bag and logged the position at the surface while being towed by the photographer (Roelfsema, 2009). A total of 1,265 benthic photos were taken. Approximate coordinates of each benthic photo were derived from the photo timestamp and the GPS coordinate timestamp using GPS Photo Link Software (www.geospatialexperts.com): coordinates of each photo were interpolated from the GPS positions logged at set times before and after the photo was captured. Benthic or substrate cover data were derived from each photo by randomly placing 24 points over each image using the Coral Point Count for Microsoft Excel (CPCe) program (Kohler and Gill, 2006). Each point was then assigned to 1 of 79 cover types, which represented the benthic feature beneath it. A benthic cover composition summary for each photo was generated automatically by the CPCe program. The resulting benthic cover data for each photo were linked to the GPS coordinates, saved as an ArcMap point shapefile, and projected to Universal Transverse Mercator WGS84 Zone 55 South.
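
The point-count step can likewise be sketched in a few lines: place 24 random points on a photo, record the cover type under each, and summarise the photo as percent cover per category. The cover labels below are an illustrative subset, and the random labelling stands in for the analyst's annotation in CPCe.

```python
# Minimal sketch of the point-count step that CPCe automates (illustrative labels):
# place 24 random points on each photo, record the cover type under each point, and
# summarise the photo as percent cover per category.
from collections import Counter
import random

random.seed(0)
cover_types = ["live coral", "macroalgae", "sand", "rubble"]   # small subset of the 79 categories
n_points = 24

# In practice each point is labelled by an analyst; here labels are drawn at random.
labels = [random.choice(cover_types) for _ in range(n_points)]
composition = {c: 100.0 * k / n_points for c, k in Counter(labels).items()}
print(composition)        # percent cover per category for this photo
```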

Relevance: 10.00%

Abstract:

An object-based image analysis (OBIA) approach was used to create a habitat map of the Lizard Reef. Briefly, georeferenced dive and snorkel photo-transect surveys were conducted at different locations surrounding Lizard Island, Australia. For the surveys, a snorkeler or diver swam over the bottom at a depth of 1-2 m in the lagoon, One Tree Beach and Research Station areas, and at 7 m depth in Watson's Bay, while taking photos of the benthos at a set height using a standard digital camera and towing a surface-float GPS which logged its track every five seconds. The camera lens provided a 1.0 m x 1.0 m footprint at 0.5 m height above the benthos. Horizontal distance between photos was estimated by fin kicks and corresponded to a surface distance of approximately 2.0 - 4.0 m. Approximate coordinates of each benthic photo were derived from the photo timestamp and the GPS coordinate timestamp using GPS Photo Link Software (www.geospatialexperts.com): coordinates of each photo were interpolated from the GPS positions logged at set times before and after the photo was captured. A dominant benthic or substrate cover type was assigned to each photo by placing 24 random points over each image using the Coral Point Count Excel program (CPCe; Kohler and Gill, 2006). Each point was then assigned a dominant cover type using a benthic cover type classification scheme containing nine first-level categories - seagrass high (>=70%), seagrass moderate (40-70%), seagrass low (<=30%), coral, reef matrix, algae, rubble, rock and sand. Benthic cover composition summaries of each photo were generated automatically in CPCe. The resulting benthic cover data for each photo were linked to the GPS coordinates, saved as an ArcMap point shapefile, and projected to Universal Transverse Mercator WGS84 Zone 56 South. The OBIA class assignment followed a hierarchical assignment based on membership rules, with levels for "reef", "geomorphic zone" and "benthic community" (above).
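
The per-photo class assignment described above reduces to a simple rule over the point-count composition. The sketch below encodes my reading of that rule, including the seagrass density thresholds; the example compositions are hypothetical.

```python
# Minimal sketch (my own reading of the classification rule, not the survey's code):
# assign each photo the dominant first-level category from its 24-point cover
# composition, splitting seagrass into high (>=70%), moderate (40-70%) and low classes.
def assign_class(composition: dict) -> str:
    """composition maps first-level categories to percent cover summing to ~100."""
    seagrass = composition.get("seagrass", 0.0)
    if seagrass >= 70.0:
        return "seagrass high"
    if seagrass >= 40.0:
        return "seagrass moderate"
    dominant = max(composition, key=composition.get)
    if dominant == "seagrass":
        return "seagrass low"
    return dominant          # coral, reef matrix, algae, rubble, rock or sand

print(assign_class({"seagrass": 50.0, "sand": 33.3, "rubble": 16.7}))    # seagrass moderate
print(assign_class({"coral": 45.8, "reef matrix": 29.2, "sand": 25.0}))  # coral
```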

Relevance: 10.00%

Abstract:

Internship report presented for the award of the degree of Master in Sport with a specialisation in Sports Training – Football