798 results for Penalty Clause
Abstract:
Desalination is a costly means of providing freshwater. Most desalination plants use either reverse osmosis (RO) or thermal distillation. Both processes have drawbacks: RO is efficient but uses expensive electrical energy; thermal distillation is inefficient but uses less expensive thermal energy. This work aims to provide an efficient RO plant that uses thermal energy. A steam-Rankine cycle has been designed to mechanically drive a batch-RO system that achieves high recovery without the high energy penalty typically incurred in a continuous-RO system. The steam may be generated by solar panels, biomass boilers, or as an industrial by-product. A novel mechanical arrangement has been designed for low cost, and a steam-jacketed arrangement has been designed for isothermal expansion and improved thermodynamic efficiency. Based on detailed heat transfer and cost calculations, a gain output ratio of 69-162 is predicted, enabling water to be treated at a cost of 71 Indian Rupees/m³ at small scale. Costs will fall with scale-up. Plants may be designed for a wide range of outputs, from 5 m³/day up to commercial versions producing 300 m³/day of clean water from brackish groundwater.
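A minimal sketch of the headline metric, assuming the standard definition of gain output ratio (GOR) for thermally driven desalination: permeate mass times latent heat of vaporisation, divided by heat input. The plant parameters below are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: gain output ratio (GOR) of a thermally driven desalination
# plant, GOR = (permeate mass x latent heat of vaporisation) / heat input.
# Plant parameters are illustrative assumptions, not figures from the paper.

H_FG = 2.33e6  # J/kg, latent heat of vaporisation of water near 70 C (assumed)

def gain_output_ratio(permeate_m3_per_day: float, heat_input_kw: float) -> float:
    """GOR of a plant producing `permeate_m3_per_day` of water
    from `heat_input_kw` of thermal power."""
    mass_per_s = permeate_m3_per_day * 1000.0 / 86400.0  # kg/s (1000 kg/m^3)
    return mass_per_s * H_FG / (heat_input_kw * 1000.0)

# Example: a 5 m^3/day plant driven by roughly 2 kW of steam heat
print(f"GOR ~ {gain_output_ratio(5.0, 2.0):.0f}")  # ~67, near the predicted range
```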
Abstract:
This article evaluates the way in which copyright infringement has been gradually shifting from an area of civil liability to one of criminal penalty. Traditionally, copyright issues have been considered from predominantly legal and/or economic perspectives. While traditional legal analysis can explain what legal changes are occurring and what impact these changes may have, it may not effectively explain ‘how’ these changes have come to occur. The authors propose an alternative interdisciplinary approach, combining legal analysis with critical security studies, which may help to explain in greater detail how policies in this field have developed. In particular, through applied securitisation theory, this article intends to demonstrate the appropriation of this field by a security discourse, and its consequences for societal and legal developments. In order to explore how the securitisation framework may be a valid approach to a subject such as copyright law, and to determine the extent to which copyright law may be said to have been securitised, this article will begin by explaining the origins and main features of securitisation theory and its applicability to legal study. The authors will then apply this framework to the development of a criminal law approach to copyright infringement, focusing on the security escalation it has undergone, developing from an economic issue into one of international security. The analysis of this evolution will be mainly characterised by the securitisation moves taking place at national, European and international levels. Finally, a general reflection will be carried out on whether the securitisation of copyright has indeed been successful and on what the consequences of such a success could be.
Abstract:
We investigate numerically the effect of ultralong Raman laser fiber amplifier design parameters, such as span length, pumping distribution, and grating reflectivity, on the RIN transfer from the pump to the transmitted signal. A comparison with traditional second-order Raman amplified schemes shows a relative performance penalty for ultralong laser systems that shrinks as span length increases. We show that careful choice of system parameters can partially offset this penalty. © 2010 Optical Society of America.
Abstract:
The thesis presents a detailed study of different Raman fibre laser (RFL) based amplification techniques and their applications in long-haul/unrepeatered coherent transmission systems. RFL-based amplification techniques were characterised from different aspects, including signal/noise power distributions, relative intensity noise (RIN), and the mode structures of the induced Raman fibre lasers. It was found for the first time that RFL-based amplification techniques can be divided into three categories in terms of the fibre laser regime: Fabry-Perot fibre laser with two FBGs; weak Fabry-Perot fibre laser with one FBG and a very low reflection near the input; and random distributed feedback (DFB) fibre laser with one FBG. It was also found that lowering the reflection near the input mitigates the RIN of the signal significantly, thanks to the reduced efficiency of the Stokes shift from the forward-propagated pump. To evaluate the transmission performance, different RFL-based amplifiers were evaluated and optimised in long-haul coherent transmission systems. The results showed that the Fabry-Perot fibre laser based amplifier with two FBGs gave a >4.15 dB Q-factor penalty with symmetrical bidirectional pumping, as the RIN of the signal was increased significantly. However, the random distributed feedback fibre laser based amplifier with one FBG mitigated the RIN of the signal, which enabled the use of bidirectional second-order pumping and consequently gave the best transmission performance, up to 7915 km. Furthermore, the random DFB fibre laser based amplifier proved effective in combating nonlinear impairments, enhancing the maximum reach by >28% in mid-link single/dual-band optical phase conjugator (OPC) transmission systems. In addition, unrepeatered transmission over >350 km of fibre using RFL-based amplification was demonstrated experimentally with DP-QPSK and DP-16QAM transmitters.
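To put the reported Q-factor penalty in context, here is a minimal sketch assuming the standard Gaussian-noise approximation BER = 0.5·erfc(Q/√2) and an assumed baseline Q; only the 4.15 dB penalty figure comes from the thesis.

```python
# Hedged sketch: effect of a Q-factor penalty (in dB) on bit error rate,
# using the Gaussian approximation BER = 0.5 * erfc(Q / sqrt(2)).
# The baseline Q is an assumed value; the thesis reports only the penalty.
import math
from scipy.special import erfc

def q_db_to_ber(q_db: float) -> float:
    q_lin = 10 ** (q_db / 20.0)  # Q in dB is conventionally 20*log10(Q)
    return 0.5 * erfc(q_lin / math.sqrt(2.0))

baseline_q_db = 12.0   # assumed baseline
penalty_db = 4.15      # Fabry-Perot laser, two FBGs, bidirectional pumping
print(f"baseline BER : {q_db_to_ber(baseline_q_db):.2e}")
print(f"with penalty : {q_db_to_ber(baseline_q_db - penalty_db):.2e}")
```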
Abstract:
The “Nash program” initiated by Nash (Econometrica 21:128–140, 1953) is a research agenda aiming at representing every axiomatically determined cooperative solution to a game as a Nash outcome of a reasonable noncooperative bargaining game. The L-Nash solution, first defined by Forgó (Interactive Decisions. Lecture Notes in Economics and Mathematical Systems, vol 229. Springer, Berlin, pp 1–15, 1983), is obtained as the limiting point of the Nash bargaining solution when the disagreement point goes to negative infinity in a fixed direction. In Forgó and Szidarovszky (Eur J Oper Res 147:108–116, 2003), the L-Nash solution was related to the solution of multicriteria decision making, and two different axiomatizations of the L-Nash solution were also given in this context. In this paper, finite bounds are established for the penalty of disagreement in certain special two-person bargaining problems, making it possible to apply all the implementation models designed for Nash bargaining problems with a finite disagreement point to obtain the L-Nash solution as well. For another set of problems where this method does not work, a version of Rubinstein’s alternating-offer game (Econometrica 50:97–109, 1982) is shown to asymptotically implement the L-Nash solution. If the penalty is internalized as a decision variable of one of the players, then a modification of Howard’s game (J Econ Theory 56:142–159, 1992) also implements the L-Nash solution.
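The limiting construction behind the L-Nash solution can be illustrated numerically. This is a minimal sketch on an assumed toy bargaining frontier y = 1 − x²: the disagreement point d = −t·a recedes to −∞ along a fixed direction a, and the Nash bargaining solutions converge to a limit point. It illustrates the definition only, not the paper's bounds or implementation games.

```python
# Hedged sketch: Nash solutions converging to the L-Nash solution as the
# disagreement point d = -t*a goes to -infinity. The frontier y = 1 - x^2
# and the direction a are assumed toy inputs, not examples from the paper.
import math
from scipy.optimize import minimize_scalar

a1, a2 = 1.0, 1.0  # direction of recession (assumed)

def nash_point(t: float) -> float:
    """x maximising the Nash product log(x + t*a1) + log(1 - x**2 + t*a2)."""
    obj = lambda x: -(math.log(x + t * a1) + math.log(1 - x**2 + t * a2))
    return minimize_scalar(obj, bounds=(0.0, 1.0), method="bounded").x

for t in (1.0, 10.0, 100.0, 1000.0):
    x = nash_point(t)
    print(f"t = {t:7.0f}   x = {x:.4f}   y = {1 - x**2:.4f}")
# The first-order condition gives x -> a2/(2*a1) = 0.5 as t -> infinity,
# which is the L-Nash solution for this toy problem.
```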
Abstract:
The main goal of this article is to examine the effect of the large increase in the number of university graduates on their labour market position, mainly on their wages. Is the group of graduates homogeneous, or are there subgroups whose members earn less than their better-placed counterparts? To answer this question, the author examines the database of the Graduate Students’ Survey (Diplomás Pályakövető Rendszer), which contains data on students of the University of Debrecen who finished their studies in 2007 and 2009. One consequence of mass higher education is that graduates may not find jobs matching their level of education and are forced into positions whose educational requirements are below their own. These so-called overeducated workers earn less than similarly qualified counterparts in well-matched jobs. In the sample studied, this wage penalty is between 12% and 17%, in line with international results.
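The abstract does not state the estimation method; a common approach for such penalties is a Mincer-type wage regression with an overeducation dummy (in the spirit of Duncan & Hoffman, 1981). The sketch below applies that approach to synthetic data; every variable and figure in it is invented.

```python
# Hedged sketch: estimating an overeducation wage penalty with OLS on a
# log-wage equation. Data are synthetic; the survey's actual variables and
# model are not specified in the abstract.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
experience = rng.uniform(0, 10, n)
overeducated = rng.integers(0, 2, n)
# simulate a penalty of ~0.15 log points for overeducated graduates
log_wage = 5.0 + 0.03 * experience - 0.15 * overeducated + rng.normal(0, 0.2, n)

X = sm.add_constant(np.column_stack([experience, overeducated]))
fit = sm.OLS(log_wage, X).fit()
penalty = 1 - np.exp(fit.params[2])  # dummy coefficient -> percentage penalty
print(f"estimated wage penalty: {penalty:.1%}")
```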
Abstract:
Death qualification is a part of voir dire that is unique to capital trials. Unlike in all other litigation, capital jurors must affirm their willingness to impose either legal punishment (life in prison or the death penalty). Jurors who assert they are able to do so are deemed “death-qualified” and are eligible for capital jury service; jurors who assert that they are unable to do so are deemed “excludable” or “scrupled” and are barred from hearing a death penalty case. During the penalty phase of capital trials, death-qualified jurors weigh the aggravators (i.e., arguments for death) against the mitigators (i.e., arguments for life) in order to determine the sentence. If the aggravating circumstances outweigh the mitigating circumstances, the jury is to recommend death; if the mitigating circumstances outweigh the aggravating circumstances, the jury is to recommend life. The jury is free to weigh each aggravating and mitigating circumstance in any manner it sees fit. Previous research has found that death qualification affects jurors' receptiveness to aggravating and mitigating circumstances (e.g., Luginbuhl & Middendorf, 1988). However, those studies used the now-defunct Witherspoon rule and did not include a case scenario for participants to reference. The purpose of this study was to investigate whether death qualification affects jurors' endorsements of aggravating and mitigating circumstances when Witt, rather than Witherspoon, is the legal standard for death qualification. Four hundred and fifty venirepersons from the 11th Judicial Circuit in Miami, Florida completed a booklet of stimulus materials that contained the following: two death qualification questions; a case scenario that included a summary of the guilt and penalty phases of a capital case; a 26-item measure that required participants to endorse aggravators, nonstatutory mitigators, and statutory mitigators on a 6-point Likert scale; and standard demographic questions. Results indicated that death-qualified venirepersons, when compared to excludables, were more likely to endorse aggravating circumstances. Excludable participants, when compared to death-qualified venirepersons, were more likely to endorse nonstatutory mitigators. There was no significant difference between death-qualified and excludable venirepersons with respect to their endorsement of 6 of the 7 statutory mitigators. It would appear that the Furman v. Georgia (1972) decision to declare the death penalty unconstitutional is frustrated by the Lockhart v. McCree (1986) affirmation of death qualification.
Abstract:
Polynomial phase modulated (PPM) signals have been shown to provide improved error-rate performance with respect to conventional modulation formats under additive white Gaussian noise and fading channels in single-input single-output (SISO) communication systems. In this dissertation, systems with two and four transmit antennas using PPM signals are presented. In both cases, full-rate space-time block codes are employed in order to take advantage of the multipath channel. For two transmit antennas, we used the orthogonal space-time block code (OSTBC) proposed by Alamouti and performed symbol-wise decoding by estimating the phase coefficients of the PPM signal using three different methods: maximum-likelihood (ML), sub-optimal ML (S-ML), and the high-order ambiguity function (HAF). For four transmit antennas, we used the full-rate quasi-OSTBC (QOSTBC) proposed by Jafarkhani. To ensure the best error-rate performance, PPM signals were selected so as to maximize the QOSTBC’s minimum coding gain distance (CGD). Since this criterion does not always provide a unique solution, an additional criterion known as the maximum channel interference coefficient (CIC) was proposed. Monte Carlo simulations showed that using QOSTBCs along with PPM constellations properly selected by the CGD and CIC criteria ensures full diversity in flat fading channels and thus low BER at high signal-to-noise ratios (SNR). Lastly, the performance of symbol-wise decoding for QOSTBCs was evaluated using a quasi zero-forcing method to decouple the received signal; although this technique reduces the decoding complexity of the system, it pays a penalty in error-rate performance at high SNRs.
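For concreteness, here is a minimal sketch of the two-antenna Alamouti OSTBC named above, showing how the code's orthogonality decouples the two symbols at the receiver. Plain QPSK symbols stand in for the PPM constellations, and the PPM phase-coefficient estimators (ML, S-ML, HAF) are not reproduced.

```python
# Hedged sketch: Alamouti's rate-one OSTBC for two transmit antennas over a
# flat-fading channel (noiseless for clarity). QPSK stands in for PPM.
import numpy as np

rng = np.random.default_rng(1)
qpsk = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))  # unit-energy QPSK

s1, s2 = rng.choice(qpsk, 2)  # two symbols per code block
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)

# Two channel uses: antennas send (s1, s2), then (-s2*, s1*)
r1 = h1 * s1 + h2 * s2
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

# Linear combining decouples the symbols thanks to the code's orthogonality
y1 = np.conj(h1) * r1 + h2 * np.conj(r2)  # = (|h1|^2 + |h2|^2) * s1
y2 = np.conj(h2) * r1 - h1 * np.conj(r2)  # = (|h1|^2 + |h2|^2) * s2

gain = abs(h1) ** 2 + abs(h2) ** 2
detect = lambda y: qpsk[np.argmin(np.abs(qpsk * gain - y))]
print(detect(y1) == s1, detect(y2) == s2)  # noiseless: both True
```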
Abstract:
In the discussion - The Nevada Gaming Debt Collection Experience - by Larry D. Strate, Assistant Professor, College of Business and Economics at the University of Nevada, Las Vegas, Strate initially outlines the article by saying: “Even though Nevada has had over a century of legalized gaming experience, the evolution of gaming debt collection has been a recent phenomenon. The author traces that history and discusses implications of the current law.” The discussion opens with a comparison between the gaming industries of New Jersey/Atlantic City and Las Vegas, Nevada. This contrast serves to point out the disparities in debt handling between the two. “There are major differences in the development of legalized gaming for both Nevada and Atlantic City. Nevada has had over a century of legalized gambling; Atlantic City, New Jersey, has completed a decade of its operation,” Strate informs you. “Nevada's gaming industry has been its primary economic base for many years; Atlantic City's entry into gaming served as a possible solution to a social problem. Nevada's processes of legalized gaming, credit play, and the collection of gaming debts were developed over a period of 125 years; Atlantic City's new industry began with gaming, gaming credit, and gaming debt collection simultaneously in 1976 [via the New Jersey Casino Control Act].” The irony is that Atlantic City, the younger venue, has had a better system for handling debt collection than the historic and traditional Las Vegas properties. Many of these properties were duplicated in New Jersey, so the dichotomy existed whereby New Jersey casinos could recoup debt while their Nevada counterparts could not. “It would seem logical that a "territory" which permitted gambling in the early 1800’s would have allowed the Nevada industry to collect its debts as any other legal enterprise. But it did not,” Strate says. Of course, this situation could not be allowed to continue, and Strate outlines the evolution. New Jersey, in effect, benefited from Nevada’s experience. “The fundamental change in gaming debt collection came through the legislature as the judicial decisions had declared gaming debts uncollectable by either a patron or a casino,” Strate informs you. “Nevada enacted its gaming debt collection act in 1983, six years after New Jersey,” Strate points out. One of the most noteworthy paragraphs in the entire article is this: “The fundamental change in 1983, and probably the most significant change in the history of gaming in Nevada since the enactment of the Open Gaming Law of 1931, was to allow non-restricted gaming licensees* to recover gaming debts evidenced by a credit instrument. The new law incorporated previously litigated terms with a new one, credit instrument.” The term is legally definable and gives Nevada courts an avenue of due process.
Abstract:
The increasing emphasis on mass customization, shortened product lifecycles, and synchronized supply chains, coupled with advances in information systems, is driving most firms towards make-to-order (MTO) operations. Increasing global competition, lower profit margins, and higher customer expectations force MTO firms to plan their capacity by managing the effective demand. The goal of this research was to maximize the operational profit of a make-to-order operation by selectively accepting incoming customer orders and simultaneously allocating capacity for them at the sales stage. To integrate the two decisions, a Mixed-Integer Linear Program (MILP) was formulated that can aid an operations manager in an MTO environment to select a set of potential customer orders such that all selected orders are fulfilled by their deadlines. The proposed model combines the order acceptance/rejection decision with detailed scheduling. Experiments with the formulation indicate that for larger problem sizes, the computational time required to determine an optimal solution is prohibitive. The formulation has a block-diagonal structure and can be decomposed into a master problem and one or more sub-problems (one per customer order) by applying Dantzig-Wolfe decomposition. To solve the original MILP efficiently, an exact Branch-and-Price algorithm was developed, and various approximation algorithms were developed to further improve the runtime. Experiments conducted unequivocally show the efficiency of these algorithms compared to a commercial optimization solver. The existing literature addresses the static order acceptance problem for a single-machine environment with regular capacity, maximizing profit with a penalty for tardiness. This dissertation solves the order acceptance and capacity planning problem for a job-shop environment with multiple resources, considering both regular and overtime resources. The Branch-and-Price algorithms developed in this dissertation are fast and can be incorporated in a decision support system used on a daily basis to help make intelligent decisions in an MTO operation.
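A toy version of the integrated accept-and-schedule decision can be written as a small MILP. The sketch below, using PuLP with invented order data and a single aggregated resource, captures only the coupling of order acceptance and capacity allocation; the dissertation's formulation (job shop, overtime, Branch-and-Price) is far richer.

```python
# Hedged sketch: joint order acceptance and capacity allocation as a MILP.
# Orders, capacities, and the single-resource simplification are assumptions.
import pulp

orders = {  # name: (profit, processing hours, deadline day)
    "A": (100, 6, 1), "B": (150, 9, 2), "C": (80, 5, 2), "D": (120, 8, 1),
}
days, cap = [1, 2], 10  # regular capacity: 10 h/day (assumed)

m = pulp.LpProblem("order_acceptance", pulp.LpMaximize)
x = {o: pulp.LpVariable(f"x_{o}", cat="Binary") for o in orders}  # accept order?
y = {(o, d): pulp.LpVariable(f"y_{o}_{d}", lowBound=0)            # hours of o on d
     for o in orders for d in days if d <= orders[o][2]}

m += pulp.lpSum(orders[o][0] * x[o] for o in orders)              # total profit
for o, (_, hours, _) in orders.items():
    # an accepted order receives its full processing time before its deadline
    m += pulp.lpSum(y[k] for k in y if k[0] == o) == hours * x[o]
for d in days:
    m += pulp.lpSum(y[k] for k in y if k[1] == d) <= cap          # daily capacity

m.solve(pulp.PULP_CBC_CMD(msg=False))
accepted = [o for o in orders if x[o].value() == 1]
print(accepted, "profit =", pulp.value(m.objective))
# Note how the profitable order D may be rejected: its deadline-1 capacity
# is better spent on a combination of other orders.
```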
Abstract:
We present our approach to real-time service-oriented scheduling problems with the objective of maximizing the total system utility. Unlike traditional utility accrual scheduling problems, in which each task is associated with only a single time utility function (TUF), we associate two different TUFs, a profit TUF and a penalty TUF, with each task, to model real-time services that not only reward early completions but also penalize abortions or deadline misses. The scheduling heuristics proposed in this paper judiciously accept, schedule, and abort real-time services when necessary to maximize the accrued utility. Our extensive experimental results show that the proposed algorithms significantly outperform traditional scheduling algorithms such as Earliest Deadline First (EDF), traditional utility accrual (UA) scheduling algorithms, and an earlier scheduling approach based on a similar model.
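The two-TUF idea can be made concrete with a simple admission test: accept a service only if its expected utility, profit when completed on time weighed against the penalty on abortion, is positive. The TUF shapes and the accept rule below are illustrative assumptions, not the paper's exact heuristics.

```python
# Hedged sketch: profit and penalty TUFs with an expected-utility admission
# test. TUF shapes and the acceptance rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Service:
    exec_time: float
    deadline: float
    max_profit: float     # profit TUF: decays linearly until the deadline
    abort_penalty: float  # penalty TUF: flat charge on abort/deadline miss

    def profit(self, finish: float) -> float:
        if finish > self.deadline:
            return -self.abort_penalty
        return self.max_profit * (1 - finish / self.deadline)

def admit(task: Service, start: float, p_finish_on_time: float) -> bool:
    """Accept only if expected utility is positive; because abortions are
    penalised, admission must be selective rather than greedy."""
    finish = start + task.exec_time
    expected = (p_finish_on_time * task.profit(finish)
                - (1 - p_finish_on_time) * task.abort_penalty)
    return expected > 0

svc = Service(exec_time=2.0, deadline=10.0, max_profit=50.0, abort_penalty=40.0)
print(admit(svc, start=0.0, p_finish_on_time=0.9))  # True
print(admit(svc, start=0.0, p_finish_on_time=0.4))  # False: risk outweighs gain
```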
Abstract:
From the second half of the twentieth century, the state began to use taxation beyond its purely fiscal character, also as a means of correcting economic distortions and social imbalances, exerting influence in different directions according to economic, social and political goals. This is what is usually called extrafiscality. In light of this phenomenon, and from a constitutional perspective, the present work analyzes item IV of article 8 of Law No. 6.967/96, which regulates the Motor Vehicle Property Tax (IPVA) in the State of Rio Grande do Norte, in view of its possible incompatibility with constitutional principles and with international guidelines for the protection of the environment. The problem of this research is seated in art. 225 of the Constitution, which provides that everyone has the right to an ecologically balanced environment. From this provision it follows that the state is responsible for protecting the environment, which requires the adoption of suitable actions to that end. However, it is doubtful whether the state law cited follows the constitutional path, since it exempts from the tax vehicles manufactured more than 10 years ago, which could encourage the conservation of a fleet of old vehicles, mostly more polluting and more harmful to the environment and human health. Is the state legislature oblivious to constitutional principles and to the global trend of environmental preservation? The work thus questions such an incentive for more polluting vehicles, which emit more gases into the atmosphere. Moreover, the international community is already moving, through important conventions, in an attempt to minimize and control global warming and climate change, and the presence of the theme in CF/88 demonstrates that the country is no stranger to the issue. The work is thus a re-reading of Law No. 6.967/96 in order to check whether it is compatible with the existing system. The methodology consists of documentary and bibliographic research, with a deductive and dialectical approach. At the end of the survey, it was found that providing a tax benefit to these vehicles encourages keeping them in circulation and contributes to the increase in air and noise pollution, in addition to the traffic problems generated. Thus, this potiguar norm cannot be regarded as an expression of extrafiscality, because in the medium and long term it encourages and worsens an environmental problem. Despite the ability-to-pay argument, this exemption is an affront to legally protected interests. The device therefore runs counter to the values of the legal system and to sustainable development. Modern tax law should be used as a tool to achieve the purposes pursued by the State, and not otherwise. It was noticed that the vast majority of Brazilian states do not follow this rule; Mato Grosso and Minas Gerais, for instance, have no such exemption. Therefore, the State of Rio Grande do Norte constitutes neither a model of sustainable public policy nor an example of environmental protection through state legislation.
Abstract:
This work presents a new model for the Heterogeneous p-median Problem (HPM), proposed to recover the hidden category structures present in data provided by a sorting-task procedure, a popular approach to understanding heterogeneous individuals' perceptions of products and brands. The new model is named the Penalty-free Heterogeneous p-median Problem (PFHPM), a single-objective version of the original HPM. It eliminates the main parameter of the HPM, the penalty factor, which weights the terms of the objective function; tuning that parameter controls how the model recovers the hidden category structures and demands broad knowledge of the problem. Additionally, two complementary formulations of the PFHPM are presented, both mixed-integer linear programs, from which lower bounds were obtained. These values were used to validate a specialized Variable Neighborhood Search (VNS) algorithm proposed to solve the PFHPM. The algorithm provided good-quality solutions, solving artificial instances generated by Monte Carlo simulation as well as real-data instances, even with limited computational resources. Statistical analyses presented in this work suggest that the new model and algorithm can recover the original category structures underlying heterogeneous individuals' perceptions more accurately than the original HPM and its algorithm. Finally, an illustrative application of the PFHPM is presented, along with some insights into new possibilities, such as extending the model to fuzzy environments.
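To make the combinatorial core concrete, the sketch below runs a minimal VNS (shaking plus swap-based local search) on the classical p-median problem with toy data. The PFHPM adds the category-recovery machinery described above, which is not reproduced here.

```python
# Hedged sketch: minimal VNS for the classical p-median problem on toy data.
# The PFHPM objective and its specialised algorithm are not reproduced.
import numpy as np

rng = np.random.default_rng(2)
pts = rng.random((30, 2))  # 30 items to categorise (toy data)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
p = 4                      # number of medians (categories)

def cost(medians):
    # p-median objective: total distance of every item to its nearest median
    return D[:, list(medians)].min(axis=1).sum()

def local_search(medians):
    improved = True
    while improved:  # first-improvement swap neighbourhood
        improved = False
        for out in list(medians):
            for cand in set(range(len(pts))) - medians:
                trial = (medians - {out}) | {cand}
                if cost(trial) < cost(medians):
                    medians, improved = trial, True
                    break  # restart from the improved solution
            if improved:
                break
    return medians

best = local_search(set(rng.choice(len(pts), p, replace=False).tolist()))
k = 1
while k <= 3:  # shaking in growing neighbourhoods
    shaken = set(best)
    for i in rng.choice(sorted(shaken), size=k, replace=False):
        shaken.discard(int(i))
    while len(shaken) < p:
        shaken.add(int(rng.integers(len(pts))))
    shaken = local_search(shaken)
    if cost(shaken) < cost(best):
        best, k = shaken, 1  # improvement: return to the first neighbourhood
    else:
        k += 1
print(sorted(best), round(float(cost(best)), 3))
```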
Abstract:
This paper presents an experimental study of the sensitivity to 15-MeV neutrons of Advanced Low Power SRAMs (A-LPSRAM) at a low bias voltage just above the threshold that allows data retention. This family of memories is characterized by a 3D structure to minimize the area penalty and to cope with latchups, as well as by the presence of integrated capacitors to hinder the occurrence of single event upsets. In low-voltage static tests, classical single event upsets were a minor source of errors, but other unexpected phenomena, such as clusters of bitflips and hard errors, turned out to be the origin of hundreds of bitflips. Moreover, no errors were observed in dynamic tests at nominal voltage. This behavior is clearly different from that of standard bulk CMOS SRAMs, where thousands of errors have been reported.
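The reported "clusters of bitflips" imply post-processing of the memory readback roughly like the following sketch, which XORs the readback against the written pattern and groups flipped words by address adjacency; the memory size, background pattern, and adjacency rule are assumptions for illustration.

```python
# Hedged sketch: classifying upsets in an irradiated-SRAM readback into
# isolated SEUs and clusters. Layout and adjacency rule are assumptions.
import numpy as np

WORDS, PATTERN = 4096, 0x55  # memory size and background pattern (assumed)
rng = np.random.default_rng(3)
mem = np.full(WORDS, PATTERN, dtype=np.uint8)
mem[100] ^= 0x01             # inject an isolated single event upset
mem[2000:2006] ^= 0x10       # inject a cluster of bitflips in adjacent words

flips = np.flatnonzero(mem ^ PATTERN)  # addresses whose readback differs
clusters, current = [], [int(flips[0])]
for addr in flips[1:]:
    if addr - current[-1] <= 1:        # adjacent address -> same cluster
        current.append(int(addr))
    else:
        clusters.append(current)
        current = [int(addr)]
clusters.append(current)

for c in clusters:
    kind = "isolated SEU" if len(c) == 1 else f"cluster of {len(c)} words"
    print(f"addr 0x{c[0]:04x}: {kind}")
```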