Abstract:
We use a unique dataset with bank clients’ security holdings for all German banks to examine how macroeconomic shocks affect the asset allocation preferences of households and non-financial firms. Our analysis focuses on two alternative mechanisms which can influence portfolio choice: wealth shocks, represented by the sovereign debt crisis in the Eurozone, and credit-supply shocks, which arise from reduced borrowing abilities during bank distress. We document heterogeneous responses to these two types of shocks. While households with large holdings of securities from stressed Eurozone countries (Greece, Ireland, Italy, Portugal, and Spain) decrease the degree of concentration in their security portfolios as a result of the Eurozone crisis, non-financial firms with similar levels of holdings from stressed Eurozone countries do not. Credit-supply shocks at the bank level (caused by bank distress) result in lower concentration for both households and non-financial corporations. We also show that only shocks to corporate credit affect bank clients’ portfolio concentration, while shocks to retail credit are inconsequential. Our results are robust to falsification tests, propensity score matching techniques, and instrumental variables estimation.
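The concentration measure is not specified in the abstract; a Herfindahl-Hirschman index (HHI) over portfolio weights is a common choice for this kind of analysis. A minimal sketch, assuming market-value weights (all numbers hypothetical):

```python
import numpy as np

def portfolio_hhi(values):
    """Herfindahl-Hirschman concentration of a security portfolio.

    values: market values of the individual positions.
    Returns a number in (0, 1]: 1 means a single-security portfolio,
    1/n means an equal-weighted portfolio of n securities.
    """
    v = np.asarray(values, dtype=float)
    w = v / v.sum()          # portfolio weights
    return float(np.sum(w ** 2))

# Example: diversifying away from one dominant position lowers the HHI.
print(portfolio_hhi([80, 10, 10]))      # 0.66, highly concentrated
print(portfolio_hhi([25, 25, 25, 25]))  # 0.25, equal-weighted
```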
Abstract:
This article evaluates the way in which copyright infringement has been gradually shifting from an area of civil liability to one of criminal penalty. Traditionally, consideration of copyright issues has been undertaken from predominantly legal and/or economic perspectives. Whereas traditional legal analysis can explain what legal changes are occurring, and what impact these changes may have, it may not effectively explain ‘how’ these changes have come to occur. The authors propose an alternative inter-disciplinary approach, combining legal analysis with critical security studies, which may help to explain in greater detail how policies in this field have developed. In particular, through applied securitisation theory, this article intends to demonstrate the appropriation of this field by a security discourse, and its consequences for societal and legal developments. In order to explore how the securitisation framework may be a valid approach to a subject such as copyright law, and to determine the extent to which copyright law may be said to have been securitised, this article will begin by explaining the origins and main features of securitisation theory and its applicability to legal study. The authors will then attempt to apply this framework to the development of a criminal law approach to copyright infringement, focusing on the security escalation it has undergone as it developed from an economic issue into one of international security. The analysis of this evolution will be mainly characterised by the securitisation moves taking place at national, European and international levels. Finally, a general reflection will be carried out on whether the securitisation of copyright has indeed been successful and on what the consequences of such a success could be.
Abstract:
We investigate numerically the effect of ultralong Raman laser fiber amplifier design parameters, such as span length, pumping distribution and grating reflectivity, on the RIN transfer from the pump to the transmitted signal. Comparison is provided with the performance of traditional second-order Raman amplified schemes, showing a relative performance penalty for ultralong laser systems that diminishes as span length increases. We show that careful choice of system parameters can partially offset this penalty. © 2010 Optical Society of America.
Abstract:
The thesis presents a detailed study of different Raman fibre laser (RFL) based amplification techniques and their applications in long-haul/unrepeatered coherent transmission systems. RFL based amplification techniques were characterised from different aspects, including signal/noise power distributions, relative intensity noise (RIN), and the mode structures of the induced Raman fibre lasers. It was found for the first time that RFL based amplification techniques can be divided into three categories in terms of the fibre laser regime: a Fabry-Perot fibre laser with two FBGs, a weak Fabry-Perot fibre laser with one FBG and very low reflection near the input, and a random distributed feedback (DFB) fibre laser with one FBG. It was also found that lowering the reflection near the input could mitigate the RIN of the signal significantly, thanks to the reduced efficiency of the Stokes shift from the forward-propagating pump. To evaluate the transmission performance, different RFL based amplifiers were evaluated and optimised in long-haul coherent transmission systems. The results showed that the Fabry-Perot fibre laser based amplifier with two FBGs gave a >4.15 dB Q-factor penalty with symmetrical bidirectional pumping, as the RIN of the signal was increased significantly. However, the random distributed feedback fibre laser based amplifier with one FBG could mitigate the RIN of the signal, which enabled the use of bidirectional second-order pumping and consequently gave the best transmission performance, up to 7915 km. Furthermore, the random DFB fibre laser based amplifier proved effective in combating nonlinear impairments, and the maximum reach was enhanced by >28% in mid-link single/dual band optical phase conjugator (OPC) transmission systems. In addition, unrepeatered transmission over >350 km of fibre using RFL based amplification techniques was demonstrated experimentally using DP-QPSK and DP-16QAM transmitters.
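The Q-factor penalties quoted above are differences between Q factors expressed in dB. As a hedged aside (the thesis's exact measurement procedure is not given here), the standard Gaussian-noise conversion from a measured BER to a Q factor in dB looks like this:

```python
import numpy as np
from scipy.special import erfcinv

def q_factor_db(ber):
    """Convert a bit-error rate to a Q factor in dB (Gaussian-noise assumption)."""
    q = np.sqrt(2.0) * erfcinv(2.0 * ber)
    return 20.0 * np.log10(q)

# A penalty is the difference between two Q factors,
# e.g. back-to-back vs. after transmission (hypothetical BERs).
penalty_db = q_factor_db(1e-3) - q_factor_db(1e-2)
print(f"Q penalty: {penalty_db:.2f} dB")  # ~2.5 dB for these BERs
```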
Abstract:
The “Nash program” initiated by Nash (Econometrica 21:128–140, 1953) is a research agenda aiming at representing every axiomatically determined cooperative solution to a game as a Nash outcome of a reasonable noncooperative bargaining game. The L-Nash solution, first defined by Forgó (Interactive Decisions. Lecture Notes in Economics and Mathematical Systems, vol 229. Springer, Berlin, pp 1–15, 1983), is obtained as the limiting point of the Nash bargaining solution when the disagreement point goes to negative infinity in a fixed direction. In Forgó and Szidarovszky (Eur J Oper Res 147:108–116, 2003), the L-Nash solution was related to the solution of multicriteria decision making, and two different axiomatizations of the L-Nash solution were also given in this context. In this paper, finite bounds are established for the penalty of disagreement in certain special two-person bargaining problems, making it possible to apply all the implementation models designed for Nash bargaining problems with a finite disagreement point to obtain the L-Nash solution as well. For another set of problems where this method does not work, a version of Rubinstein’s alternating-offer game (Econometrica 50:97–109, 1982) is shown to asymptotically implement the L-Nash solution. If the penalty is internalized as a decision variable of one of the players, then a modification of Howard’s game (J Econ Theory 56:142–159, 1992) also implements the L-Nash solution.
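As an illustration of the limiting construction (not the paper's implementation), the sketch below computes the Nash bargaining solution numerically on a strictly convex Pareto frontier and pushes the disagreement point to negative infinity along the fixed direction (1, 2); the iterates converge to the L-Nash point:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy strictly convex Pareto frontier: u2 = sqrt(1 - u1^2), u1 in [0, 1].
def nash_point(d1, d2):
    """Nash bargaining solution for disagreement point (d1, d2):
    maximize the Nash product (u1 - d1)(u2 - d2) over the frontier."""
    res = minimize_scalar(
        lambda u1: -(u1 - d1) * (np.sqrt(1.0 - u1 ** 2) - d2),
        bounds=(0.0, 1.0), method="bounded")
    return res.x, float(np.sqrt(1.0 - res.x ** 2))

# Disagreement point pushed to -infinity along the fixed direction (1, 2):
for t in (1.0, 10.0, 100.0, 1000.0):
    print(t, nash_point(-t, -2.0 * t))   # converges to about (0.894, 0.447)
```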
Abstract:
The aim of this article is to examine in more depth how the large increase in the number of higher-education graduates has affected their labour market position, in particular their earnings. Is the group of university graduates homogeneous, or can subgroups be identified whose members earn less than their better-positioned peers? To answer this question, the author uses data on students of the University of Debrecen who graduated in 2007 and 2009, collected through the Graduate Career Tracking System (Diplomás Pályakövető Rendszer). One consequence of mass higher education may be that a graduate cannot find a job matching his or her qualification and is forced to take a position whose educational requirements are lower. Such overeducated workers earn less than similarly qualified peers working in matched jobs. According to the sample examined, this wage penalty was around 12-17% for University of Debrecen graduates, which is in line with international results.
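The abstract does not state the estimation method; a Mincer-type log-wage regression with an overeducation dummy is the conventional way a wage penalty of this kind is measured. A sketch on simulated data (all variable names and numbers hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey extract: log wages, an overeducation indicator, controls.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "overeducated": rng.integers(0, 2, n),
    "experience": rng.uniform(0, 10, n),
    "female": rng.integers(0, 2, n),
})
df["log_wage"] = (5.0 - 0.15 * df.overeducated + 0.03 * df.experience
                  - 0.05 * df.female + rng.normal(0, 0.2, n))

# The coefficient on `overeducated` is the log-wage penalty;
# exp(beta) - 1 converts it to a percentage (about -14% by construction here).
fit = smf.ols("log_wage ~ overeducated + experience + female", data=df).fit()
print(fit.params["overeducated"], np.expm1(fit.params["overeducated"]))
```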
Abstract:
Death qualification is a part of voir dire that is unique to capital trials. Unlike in all other litigation, capital jurors must affirm their willingness to impose both possible punishments (life in prison or the death penalty). Jurors who assert that they are able to do so are deemed “death-qualified” and are eligible for capital jury service; jurors who assert that they are unable to do so are deemed “excludable” or “scrupled” and are barred from hearing a death penalty case. During the penalty phase in capital trials, death-qualified jurors weigh the aggravators (i.e., arguments for death) against the mitigators (i.e., arguments for life) in order to determine the sentence. If the aggravating circumstances outweigh the mitigating circumstances, then the jury is to recommend death; if the mitigating circumstances outweigh the aggravating circumstances, then the jury is to recommend life. The jury is free to weigh each aggravating and mitigating circumstance in any manner it sees fit. Previous research has found that death qualification impacts jurors' receptiveness to aggravating and mitigating circumstances (e.g., Luginbuhl & Middendorf, 1988). However, these studies utilized the now-defunct Witherspoon rule and did not include a case scenario for participants to reference. The purpose of this study was to investigate whether death qualification affects jurors' endorsements of aggravating and mitigating circumstances when Witt, rather than Witherspoon, is the legal standard for death qualification. Four hundred and fifty venirepersons from the 11th Judicial Circuit in Miami, Florida completed a booklet of stimulus materials that contained the following: two death qualification questions; a case scenario that included a summary of the guilt and penalty phases of a capital case; a 26-item measure that required participants to endorse aggravators, nonstatutory mitigators, and statutory mitigators on a 6-point Likert scale; and standard demographic questions. Results indicated that death-qualified venirepersons, when compared to excludables, were more likely to endorse aggravating circumstances. Excludable participants, when compared to death-qualified venirepersons, were more likely to endorse nonstatutory mitigators. There was no significant difference between death-qualified and excludable venirepersons with respect to their endorsement of six of the seven statutory mitigators. It would appear that the Furman v. Georgia (1972) decision to declare the death penalty unconstitutional is frustrated by the Lockhart v. McCree (1986) affirmation of death qualification.
Abstract:
The reinforcing effects of diverse tactile stimuli were examined in this study. The study had two purposes. First, it expanded on the Pelaez-Nogueras, Field, Gewirtz, Cigales, Gonzalez, Sanchez and Clasky (1997) finding that stroking increases infants' gaze duration, and smiling and vocalization frequencies, more than tickling/poking. Instead of presenting poking and tickling as a single stimulus combination, this study separated poking and tickling in order to measure the effects of each component separately. Further, the effects of poking, tickling/tapping and stroking intensity (i.e., tactile pressure) were compared by having both mild and intense conditions. Second, this study compared the reinforcing efficacy of mother-delivered tactile stimulation to that of infant-originated tactile exploration. Twelve infants from 2 to 5 months of age participated in this study. The experiment was conducted using a repeated-measures A-B-A-C-A-D reversal design. The A phases signified baselines and reversals. The B, C, and D phases consisted of alternating treatments (either mild stroking vs. mild poking vs. mild tickling/tapping, intense stroking vs. intense poking vs. intense tickling/tapping, or mother-delivered tactile stimulation vs. infant-originated tactile exploration). Three experimental hypotheses were assessed: (1) infant leg-kick rate would be greater when it produced stroking or tickling/tapping (presumptive positive reinforcers) than when it produced poking (a possible punisher), regardless of tactile pressure; (2) infant leg-kick rate would be greater when it produced a more intense level of stroking or tickling/tapping and lower when it produced intense poking compared to mild poking; (3) infant leg-kick rate would be greater for mother-delivered tactile stimulation than for infant-originated tactile exploration. Visual inspection and inferential statistical methods were used to analyze the results. The data supported the first two hypotheses; mixed support emerged for the third. This study made several important contributions to the field of psychology. First, it was the first study to quantify the pressure of tactile stimulation, via a pressure meter developed by the researcher. Additionally, the results yielded valuable information about the effects of different modalities of touch.
Abstract:
Polynomial phase modulated (PPM) signals have been shown to provide improved error rate performance with respect to conventional modulation formats under additive white Gaussian noise and fading channels in single-input single-output (SISO) communication systems. In this dissertation, systems with two and four transmit antennas using PPM signals are presented. In both cases we employed full-rate space-time block codes in order to take advantage of the multipath channel. For two transmit antennas, we used the orthogonal space-time block code (OSTBC) proposed by Alamouti and performed symbol-wise decoding by estimating the phase coefficients of the PPM signal using three different methods: maximum-likelihood (ML), sub-optimal ML (S-ML) and the high-order ambiguity function (HAF). In the case of four transmit antennas, we used the full-rate quasi-OSTBC (QOSTBC) proposed by Jafarkhani. However, in order to ensure the best error rate performance, PPM signals were selected so as to maximize the QOSTBC’s minimum coding gain distance (CGD). Since this method does not always provide a unique solution, an additional criterion known as the maximum channel interference coefficient (CIC) was proposed. Through Monte Carlo simulations it was shown that by using QOSTBCs along with PPM constellations properly selected on the basis of the CGD and CIC criteria, full diversity in flat fading channels, and thus low BER at high signal-to-noise ratios (SNR), can be ensured. Lastly, the performance of symbol-wise decoding for QOSTBCs was evaluated. In this case a quasi zero-forcing method was used to decouple the received signal, and it was shown that although this technique reduces the decoding complexity of the system, there is a penalty to be paid in terms of error rate performance at high SNRs.
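The Alamouti OSTBC mentioned above is compact enough to show concretely. The sketch below uses plain QPSK symbols rather than the dissertation's PPM constellations, together with the standard Alamouti combining that decouples the two transmitted symbols at the receiver:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two QPSK symbols sent over two antennas in two time slots (Alamouti OSTBC).
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)  # flat-fading gains
noise = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# Slot 1: antennas send (s1, s2); slot 2: antennas send (-conj(s2), conj(s1)).
r1 = h1 * s1 + h2 * s2 + noise[0]
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[1]

# Standard Alamouti combining decouples the two symbols.
g = np.abs(h1) ** 2 + np.abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print(s1_hat, s2_hat)  # close to s1, s2
```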
Abstract:
The increasing emphasis on mass customization, shortened product lifecycles, and synchronized supply chains, coupled with advances in information systems, is driving most firms towards make-to-order (MTO) operations. Increasing global competition, lower profit margins, and higher customer expectations force MTO firms to plan their capacity by managing the effective demand. The goal of this research was to maximize the operational profits of a make-to-order operation by selectively accepting incoming customer orders and simultaneously allocating capacity for them at the sales stage. To integrate the two decisions, a Mixed-Integer Linear Program (MILP) was formulated which can aid an operations manager in an MTO environment in selecting a set of potential customer orders such that all the selected orders are fulfilled by their deadlines. The proposed model combines the order acceptance/rejection decision with detailed scheduling. Experiments with the formulation indicate that for larger problem sizes, the computational time required to determine an optimal solution is prohibitive. The formulation has a block-diagonal structure and can be decomposed into one or more sub-problems (one sub-problem for each customer order) and a master problem by applying Dantzig-Wolfe’s decomposition principles. To solve the original MILP efficiently, an exact Branch-and-Price algorithm was developed. Various approximation algorithms were developed to further improve the runtime. Experiments conducted unequivocally show the efficiency of these algorithms compared to a commercial optimization solver. The existing literature addresses the static order acceptance problem for a single-machine environment with regular capacity and an objective to maximize profits under a penalty for tardiness. This dissertation solves the order acceptance and capacity planning problem for a job shop environment with multiple resources; both regular and overtime resources are considered. The Branch-and-Price algorithms developed in this dissertation are faster and can be incorporated in a decision support system which can be used on a daily basis to help make intelligent decisions in an MTO operation.
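A toy version of the integrated accept-and-schedule decision can be written as a small time-indexed MILP. The sketch below (using the PuLP modelling library, a single resource, and preemption allowed) is a much-simplified stand-in for the job-shop formulation solved in the dissertation; all data are hypothetical:

```python
import pulp

# Hypothetical orders: profit, processing time, and deadline (in periods).
orders = {"A": dict(profit=40, time=3, deadline=4),
          "B": dict(profit=25, time=2, deadline=2),
          "C": dict(profit=30, time=4, deadline=5)}
horizon, cap = 5, 2  # 5 periods, 2 units of capacity per period

m = pulp.LpProblem("order_acceptance", pulp.LpMaximize)
accept = pulp.LpVariable.dicts("accept", orders, cat="Binary")
# x[o][t] = 1 if order o occupies one capacity unit in period t
x = pulp.LpVariable.dicts("x", (orders, range(horizon)), cat="Binary")

# Maximize total profit of the accepted orders.
m += pulp.lpSum(orders[o]["profit"] * accept[o] for o in orders)
for o, d in orders.items():
    # An accepted order gets exactly its processing time, all before its deadline.
    m += pulp.lpSum(x[o][t] for t in range(horizon)) == d["time"] * accept[o]
    m += pulp.lpSum(x[o][t] for t in range(d["deadline"], horizon)) == 0
for t in range(horizon):
    m += pulp.lpSum(x[o][t] for o in orders) <= cap  # capacity per period

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({o: int(accept[o].value()) for o in orders})  # which orders to accept
```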
Abstract:
We present our approach to real-time service-oriented scheduling problems with the objective of maximizing the total system utility. Unlike traditional utility accrual scheduling problems, in which each task is associated with only a single time utility function (TUF), we associate two different TUFs—a profit TUF and a penalty TUF—with each task, to model real-time services that not only need to reward early completions but also need to penalize abortions and deadline misses. The scheduling heuristics proposed in this paper judiciously accept, schedule, and abort real-time services when necessary to maximize the accrued utility. Our extensive experimental results show that our proposed algorithms can significantly outperform traditional scheduling algorithms such as the Earliest Deadline First (EDF), traditional utility accrual (UA) scheduling algorithms, and an earlier scheduling approach based on a similar model.
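The paper's exact TUF shapes and heuristics are not given in the abstract; the sketch below illustrates the two-TUF idea with hypothetical shapes and a simple expectation-based admission test:

```python
# Hypothetical TUFs: a completed task earns a time-decreasing profit,
# an aborted or late task incurs a fixed penalty.
def profit_tuf(t, deadline, peak=10.0):
    """Utility of finishing at time t: linearly decaying, zero past the deadline."""
    return peak * (1.0 - t / deadline) if t <= deadline else 0.0

def penalty_tuf():
    """Utility (negative) of aborting or missing the deadline."""
    return -4.0

def expected_utility(t_finish, deadline, p_success):
    # Admission test: accept only if the expected accrued utility is positive.
    return p_success * profit_tuf(t_finish, deadline) + (1 - p_success) * penalty_tuf()

print(expected_utility(t_finish=3.0, deadline=8.0, p_success=0.9))  # > 0: accept
print(expected_utility(t_finish=7.5, deadline=8.0, p_success=0.5))  # < 0: reject
```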
Abstract:
This work presents a new model for the Heterogeneous p-median Problem (HPM), proposed to recover the hidden category structures present in data provided by a sorting task procedure, a popular approach to understanding heterogeneous individuals’ perceptions of products and brands. The new model is named the Penalty-free Heterogeneous p-median Problem (PFHPM), a single-objective version of the original HPM that eliminates its main parameter, the penalty factor, which weights the terms of the objective function. Adjusting this parameter controls the way the model recovers the hidden category structures present in the data and requires broad knowledge of the problem. Additionally, two complementary formulations for the PFHPM are presented, both mixed-integer linear programming problems, from which lower bounds were obtained for the PFHPM. These values were used to validate a specialized Variable Neighborhood Search (VNS) algorithm proposed to solve the PFHPM. This algorithm provided good-quality solutions for the PFHPM, solving artificially generated instances from a Monte Carlo simulation as well as real data instances, even with limited computational resources. Statistical analyses presented in this work suggest that the new model and algorithm, the PFHPM, can recover the original category structures related to heterogeneous individuals’ perceptions more accurately than the original model and algorithm, the HPM. Finally, an illustrative application of the PFHPM is presented, together with some insights into new possibilities for it, such as extending the new model to fuzzy environments.
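The PFHPM itself is beyond the abstract, but the classic p-median objective underlying the HPM family is easy to state. A minimal sketch (brute force on a toy instance; a VNS such as the one proposed here would instead explore swap neighbourhoods to handle realistic sizes):

```python
import numpy as np
from itertools import combinations

# Classic p-median objective: choose p "median" items so that the total
# dissimilarity from every item to its closest median is minimal.
def pmedian_cost(D, medians):
    return D[:, list(medians)].min(axis=1).sum()

rng = np.random.default_rng(2)
pts = rng.random((12, 2))                               # 12 items in a 2-D perceptual space
D = np.linalg.norm(pts[:, None] - pts[None], axis=-1)   # pairwise dissimilarities

p = 3
best = min(combinations(range(12), p), key=lambda m: pmedian_cost(D, m))
print(best, round(pmedian_cost(D, best), 3))  # optimal medians and their cost
```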