972 results for "Default penalties"


Relevance: 10.00%

Abstract:

Wavelength-tunable electro-absorption modulated distributed Bragg reflector lasers (TEMLs) are promising light sources for dense wavelength division multiplexing (DWDM) optical fiber communication systems because of their high modulation speed, small chirp, low drive voltage, compactness and fast wavelength tuning, which together increase transmission capacity, functionality and flexibility. Materials with a bandgap difference as large as 250 nm have been integrated on the same wafer by a technique combining selective area growth (SAG) and quantum well intermixing (QWI), providing a flexible and controllable platform for photonic integrated circuits (PICs). A TEML has been fabricated by this technique for the first time. The device shows the following characteristics: a threshold current of 37 mA; an output power of 3.5 mW at 100 mA injection and 0 V modulator bias; an extinction ratio of more than 20 dB, when coupled into a single-mode fiber, for modulator reverse voltages from 0 V to 2 V; and a wavelength tuning range of 4.4 nm covering six 100-GHz WDM channels. A clearly open eye diagram is observed when the integrated EAM is driven with a 10-Gb/s electrical NRZ signal. Good transmission performance is obtained, with power penalties of less than 2.2 dB at a bit error ratio (BER) of 10^-10 after 44.4 km of standard fiber transmission.
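As a quick plausibility check on the reported tuning range, the wavelength spacing of a 100-GHz DWDM grid and the number of channels covered by 4.4 nm can be estimated as below; the 1550 nm centre wavelength is an assumption (the abstract does not state the operating band), while the other figures come from the abstract.

```python
# Rough check: how many 100-GHz channels fit into a 4.4 nm tuning range?
# Assumes operation near 1550 nm (C-band); the abstract does not state the band.
C = 2.99792458e8           # speed of light, m/s
wavelength = 1550e-9       # assumed centre wavelength, m
grid_spacing_hz = 100e9    # DWDM grid spacing, Hz
tuning_range_nm = 4.4      # tuning range reported in the abstract, nm

# Convert frequency spacing to wavelength spacing: d_lambda = lambda^2 * d_f / c
grid_spacing_nm = wavelength ** 2 * grid_spacing_hz / C * 1e9
channels = int(tuning_range_nm // grid_spacing_nm) + 1

print(f"100 GHz near 1550 nm is about {grid_spacing_nm:.2f} nm")  # ~0.80 nm
print(f"channels covered by 4.4 nm: {channels}")                  # 6, consistent with the abstract
```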

Relevance: 10.00%

Abstract:

A transmission volume phase holographic grating (VPHG) is adopted as the spectral element in a real-time optical channel performance monitor (OCPM), which is in strong demand in dense wavelength-division-multiplexing (DWDM) systems. The tolerance on the incident angle, which can be fully specified by two angles θ and φ, is derived in this paper. Usually, when the incident angle is discussed, the default assumption is that the plane of incidence is perpendicular to the grating fringes; here the non-perpendicular case is considered. By combining the theoretical analysis of the VPHG with its use in the OCPM, and varying θ and φ precisely in both computation and experiment, the two quantities that fully characterize the performance of the VPHG, the diffraction efficiency and the resolution, are analyzed. The results show that the diffraction efficiency varies greatly with changes in θ or φ, but, viewed over the whole C-band, only the peak diffraction efficiency drifts to another wavelength. The resolution deteriorates more rapidly than the diffraction efficiency as φ changes, and more slowly as θ changes. Only if |φ| ≤ 1° and α_B − 0.5° ≤ θ ≤ α_B + 0.5° (where α_B is the Bragg angle) is the performance of the VPHG good enough for use in an OCPM system.
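The abstract does not give the grating parameters, so the following is only a generic illustration of the kind of calculation behind the diffraction-efficiency analysis: Kogelnik's two-wave expression for a lossless transmission volume grating at Bragg incidence, with hypothetical values for wavelength, thickness, index modulation and Bragg angle.

```python
import numpy as np

def kogelnik_efficiency(wavelength_nm, thickness_um, delta_n, bragg_angle_deg):
    """Diffraction efficiency of a lossless transmission volume grating at Bragg
    incidence (Kogelnik): eta = sin^2( pi * delta_n * d / (lambda * cos(theta_B)) ).
    The angle is taken inside the grating medium."""
    lam = wavelength_nm * 1e-9
    d = thickness_um * 1e-6
    theta_b = np.radians(bragg_angle_deg)
    nu = np.pi * delta_n * d / (lam * np.cos(theta_b))
    return np.sin(nu) ** 2

# Hypothetical parameters, not taken from the paper
print(kogelnik_efficiency(wavelength_nm=1550, thickness_um=20, delta_n=0.02, bragg_angle_deg=15))
```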

Relevance: 10.00%

Abstract:

The Simulating WAves Nearshore (SWAN) wave model has been widely used in coastal areas, lakes and estuaries. However, when we used the default parameters of the SWAN source-function formulas to simulate waves in the Bohai Sea, we found poor agreement between the model results and measurements for four typical cases. We also found that, under the same wind forcing, the results of the two wind-generation expressions (Komen and Janssen) differed greatly. Further study showed that the proportionality coefficient α in the linear growth term of the wave-growth source function plays an unappreciated role in wave development. Based on experiments and analysis, we concluded that α should vary rather than remain a constant, so a coefficient α that changes with the friction velocity U* was introduced into the linear growth term. Four weather processes were used to validate this improvement. The results with the varying coefficient α agree much better with the measurements than those with the default constant α, and the large differences between the Komen and Janssen wind-generation expressions are eliminated. We also tested, for the same four weather processes, the new white-capping mechanism based on the cumulative steepness method. Its parameters were found to be unsuitable for the Bohai Sea, whereas Alkyon's white-capping mechanism can be applied to the Bohai Sea after adjustment, demonstrating that the improvement of the parameter α can improve simulated results for the Bohai Sea.
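The abstract does not give the fitted relation between α and the friction velocity, so the sketch below only illustrates the structure of the change: a Cavaleri and Malanotte-Rizzoli type linear wind-input term of the kind used in SWAN, with its constant coefficient replaced by a hypothetical function of u* (the low-frequency filter SWAN applies is omitted for brevity).

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def linear_growth(theta, theta_wind, u_star, alpha):
    """Cavaleri & Malanotte-Rizzoli type linear wind-input term, per spectral bin:
    A = alpha / g^2 * (u* max(0, cos(theta - theta_wind)))^4.
    SWAN's low-frequency filter is omitted here."""
    directional = np.maximum(0.0, np.cos(theta - theta_wind))
    return alpha / G ** 2 * (u_star * directional) ** 4

def alpha_constant():
    return 1.5e-3  # default constant proportionality coefficient

def alpha_varying(u_star):
    # Hypothetical dependence on friction velocity; the relation actually fitted
    # for the Bohai Sea is not given in the abstract.
    return 1.5e-3 * (u_star / 0.3)

theta = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
print(linear_growth(theta, 0.0, u_star=0.4, alpha=alpha_constant()).max())
print(linear_growth(theta, 0.0, u_star=0.4, alpha=alpha_varying(0.4)).max())
```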

Relevance: 10.00%

Abstract:

Seismic exploration is the main tool of petroleum exploration. As demand for petroleum grows and exploration matures, imaging areas of complex geological structure has become the main task of the oil industry. Prestack depth migration was developed for this purpose: it images complex structures well, but its result depends strongly on the velocity model, so velocity model building for prestack depth migration has become a major research area. This thesis systematically analyzes the differences in prestack depth migration practice between our country and abroad, and develops a tomographic velocity-analysis method that requires no layered velocity model, a residual-curvature velocity-analysis method based on a velocity model, and a method for removing pre-processing errors.

The tomographic approach to velocity analysis is examined first. It is theoretically complete but difficult to apply. The method picks first arrivals, compares the picked times with the times calculated in a theoretical velocity model, and back-projects the differences along the ray paths to obtain an updated velocity model. Its only assumption is the high-frequency approximation, so it is effective and efficient. It still has a weakness, however: picking first arrivals in prestack data is difficult, because the signal-to-noise ratio is low and many events cross one another, especially in areas of complex geology. To address this picking problem, a new tomographic velocity-analysis method with no layered velocity model is developed. Event times need not be picked continuously; picks can be made wherever they are reliable. Besides the travel times used by routine tomography, the method also uses the local slope of events, and a high-resolution slope-analysis method is introduced to improve picking precision.

Residual-curvature velocity analysis is also investigated, and its application is found to be unsatisfactory and inefficient: its assumptions are rigid and it is a local optimization method, so it cannot solve the velocity problem in areas of strong lateral velocity variation. A new, globally optimized method is developed to improve the precision of velocity model building.

So far the workflow of prestack depth migration in our country has followed the practice used abroad: before velocity model building, the original seismic data are corrected to a datum plane, and the prestack depth migration is then performed. The well-known success story is the Gulf of Mexico, where the near-surface structure is simple, pre-processing is straightforward and its precision is high. In our country, however, most seismic work is on land, where the near-surface layer is complex; in some areas the pre-processing error is large and degrades velocity model building. A new method is therefore developed to remove pre-processing errors and improve the precision of the velocity model.

The main contributions are: (1) an effective tomographic velocity model building method that requires no layered velocity model; (2) a new high-resolution slope-analysis method; (3) a globally optimized residual-curvature velocity model building method based on a velocity model; (4) an effective method for removing pre-processing errors. All of the methods listed above have been verified with theoretical calculations and real seismic data.
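The thesis does not reproduce its algorithm in this abstract; as a generic illustration of the core idea of tomographic velocity analysis, back-projecting traveltime residuals along ray paths, a minimal SIRT-style update of a gridded slowness model might look like the sketch below. The ray-path matrix, relaxation factor and model values are all invented for the example.

```python
import numpy as np

def sirt_update(L, t_obs, slowness, relaxation=0.1):
    """One SIRT-style iteration of traveltime tomography.

    L        : (n_rays, n_cells) matrix of ray path lengths through each cell (m)
    t_obs    : (n_rays,) observed first-arrival times (s)
    slowness : (n_cells,) current slowness model (s/m)
    """
    t_pred = L @ slowness                          # traveltimes predicted by current model
    residual = t_obs - t_pred                      # misfit per ray
    row_norm = (L ** 2).sum(axis=1) + 1e-12        # normalisation per ray
    # Back-project each residual along its ray path, then average over the rays crossing each cell
    correction = (L * (residual / row_norm)[:, None]).sum(axis=0)
    hits = (L > 0).sum(axis=0) + 1e-12
    return slowness + relaxation * correction / hits

# Tiny synthetic example: 3 rays, 4 cells (numbers are invented; the system is
# underdetermined, so the result fits the data rather than recovering the true model).
L = np.array([[100.0, 100.0, 0.0, 0.0],
              [0.0, 100.0, 100.0, 0.0],
              [0.0, 0.0, 100.0, 100.0]])
true_slowness = np.array([4.0e-4, 5.0e-4, 4.5e-4, 4.0e-4])   # s/m, i.e. 2000-2500 m/s
t_obs = L @ true_slowness
model = np.full(4, 4.5e-4)
for _ in range(50):
    model = sirt_update(L, t_obs, model)
print(model)
```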

Relevance: 10.00%

Abstract:

Eight experiments tested how object-array structure and learning location influence the establishment and use of self-to-object and object-to-object spatial representations during locomotion and reorientation. In Experiments 1 to 4, participants learned either at the periphery of or amidst a regular or irregular object array, and then pointed to objects while blindfolded in three conditions: before turning (baseline), after rotating 240 degrees (updating), and after disorientation (disorientation). In Experiments 5 to 8, participants were instructed to keep track of self-to-object or object-to-object spatial relations before rotation. In each condition, the configuration error, defined as the standard deviation across target objects of the mean signed pointing error per target, was calculated as an index of the fidelity of the representation used in that condition. Results indicate that participants form both self-to-object and object-to-object spatial representations after learning an object array. Object-array structure influences which representation is selected during updating: by default, the object-to-object representation is updated when people learn a regular array, and the self-to-object representation is updated when they learn an irregular array, but people can also update the other representation when required to do so. The fidelity of the representations constrains this kind of "switch": people can only switch from a low-fidelity to a high-fidelity representation, or between two representations of similar fidelity; they cannot switch from a high-fidelity to a low-fidelity representation. Learning location may influence the fidelity of the representations: when people learned at the periphery of the array they acquired both self-to-object and object-to-object representations of high fidelity, but when they learned amidst the array they acquired only a high-fidelity self-to-object representation, and the fidelity of the object-to-object representation was low.
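For concreteness, the configuration error defined above (the standard deviation, across target objects, of each object's mean signed pointing error) can be computed as in the short sketch below; the pointing-error values are invented for illustration.

```python
import numpy as np

def configuration_error(signed_errors_by_target):
    """Standard deviation, across target objects, of the mean signed
    pointing error per target (degrees)."""
    per_target_means = [np.mean(errors) for errors in signed_errors_by_target]
    return float(np.std(per_target_means))

# Hypothetical signed pointing errors (degrees) for four target objects
errors = [
    [5.0, 8.0, 3.0],       # target A
    [-12.0, -9.0, -15.0],  # target B
    [2.0, -1.0, 4.0],      # target C
    [20.0, 17.0, 22.0],    # target D
]
print(f"configuration error = {configuration_error(errors):.1f} degrees")
```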

Relevance: 10.00%

Abstract:

This paper presents experimental results that investigate the effects of soil liquefaction on the modal parameters (i.e. natural frequency and damping ratio) of pile-supported structures. The tests were carried out using the shaking table facility of the Bristol Laboratory for Advanced Dynamics Engineering (BLADE) at the University of Bristol (UK), whereby four pile-supported structures (two single piles and two pile groups), with and without superstructure mass, were tested. The experimental investigation monitored the variation in natural frequency and damping of the four physical models at different degrees of excess pore water pressure generation and in the fully liquefied condition. The results showed that the natural frequency of pile-supported structures may decrease considerably owing to the loss of lateral support offered by the soil to the pile, while the damping ratio of the structure may increase to values in excess of 20%. These findings have important design consequences: (a) for low-period structures, a substantial reduction of spectral acceleration is expected; (b) during and after liquefaction, the response of the system may be dictated by the interaction of multiple loadings, that is, horizontal, axial and overturning moment, which were negligible prior to liquefaction; and (c) with the onset of liquefaction, the increased flexibility of the pile-supported structure means that larger spectral displacements may be expected, which in turn may enhance P-delta effects and consequently amplify the overturning moment. Practical implications for pile design are discussed.

Relevance: 10.00%

Abstract:

For the past fifty years, interest in issues beyond pure philology has been a watchword in comparative literary studies. Comparative studies, which by default employ a variety of methods, run the major risk, as the experience of American comparative literature shows, of descending into dangerous ‘everythingism’ or of losing their identity. They perform well, however, when literature remains one of the segments of the comparison. In such instances, the approach proves effective in exploring the ‘correspondences of the arts’ and the problems of identity and multiculturalism, and it contributes to research into the transfer of ideas. Hence, it delves into phenomena that exist on the borderlines of literature, the fine arts and other fields of the humanities, employing strategies of interpretation typical of each of those fields. In the process there emerges a “borderline methodology”, whose distinctive feature is heterogeneity in the conduct of research. This, in turn, requires the scholar to be both ingenious and creative in selecting topics, and to possess competence in literary studies as well as in the related field.

Relevance: 10.00%

Abstract:

The Internet has brought unparalleled opportunities for expanding availability of research by bringing down economic and physical barriers to sharing. The digitally networked environment promises to democratize access, carry knowledge beyond traditional research niches, accelerate discovery, encourage new and interdisciplinary approaches to ever more complex research challenges, and enable new computational research strategies. However, despite these opportunities for increasing access to knowledge, the prices of scholarly journals have risen sharply over the past two decades, often forcing libraries to cancel subscriptions. Today even the wealthiest institutions cannot afford to sustain all of the journals needed by their faculties and students. To take advantage of the opportunities created by the Internet and to further their mission of creating, preserving, and disseminating knowledge, many academic institutions are taking steps to capture the benefits of more open research sharing. Colleges and universities have built digital repositories to preserve and distribute faculty scholarly articles and other research outputs. Many individual authors have taken steps to retain the rights they need, under copyright law, to allow their work to be made freely available on the Internet and in their institution's repository. And, faculties at some institutions have adopted resolutions endorsing more open access to scholarly articles. Most recently, on February 12, 2008, the Faculty of Arts and Sciences (FAS) at Harvard University took a landmark step. The faculty voted to adopt a policy requiring that faculty authors send an electronic copy of their scholarly articles to the university's digital repository and that faculty authors automatically grant copyright permission to the university to archive and to distribute these articles unless a faculty member has waived the policy for a particular article. Essentially, the faculty voted to make open access to the results of their published journal articles the default policy for the Faculty of Arts and Sciences of Harvard University. As of March 2008, a proposal is also under consideration in the University of California system by which faculty authors would commit routinely to grant copyright permission to the university to make copies of the faculty's scholarly work openly accessible over the Internet. Inspired by the example set by the Harvard faculty, this White Paper is addressed to the faculty and administrators of academic institutions who support equitable access to scholarly research and knowledge, and who believe that the institution can play an important role as steward of the scholarly literature produced by its faculty. This paper discusses both the motivation and the process for establishing a binding institutional policy that automatically grants a copyright license from each faculty member to permit deposit of his or her peer-reviewed scholarly articles in institutional repositories, from which the works become available for others to read and cite.

Relevance: 10.00%

Abstract:

The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. We ask a fundamental question: what is the basic power of TCP to predict network state, including wireless error conditions? The goal is to improve or readily exploit this predictive power so that TCP (or variants) can perform well in generalized network settings. To that end, we use Maximum Likelihood Ratio tests to evaluate TCP as a detector/estimator. We quantify how well network state can be estimated, given network responses such as distributions of packet delays or of TCP throughput conditioned on the type of packet loss. Using our model-based approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient detector can be built; that distributions of network loads can provide an effective means of estimating packet-loss type; and that packet delay is a better signal of network state than short-term throughput. We demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and by penalties on incorrect estimation.
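The conditional delay distributions are estimated in the paper from simulation; the sketch below only illustrates the likelihood-ratio idea itself, classifying a loss as congestion-induced or wireless from the delay observed around the loss, using invented Gaussian delay models.

```python
from scipy.stats import norm

# Hypothetical conditional delay models (seconds): congestion losses tend to be
# preceded by high queueing delay, wireless losses need not be. Values are invented.
delay_given_congestion = norm(loc=0.120, scale=0.020)
delay_given_wireless = norm(loc=0.060, scale=0.015)

def classify_loss(observed_delay, threshold=1.0):
    """Likelihood-ratio test on the packet delay observed around a loss."""
    ratio = delay_given_congestion.pdf(observed_delay) / delay_given_wireless.pdf(observed_delay)
    return "congestion" if ratio > threshold else "wireless"

for delay in (0.055, 0.090, 0.130):
    print(delay, classify_loss(delay))
```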

Relevance: 10.00%

Abstract:

(This Technical Report revises TR-BUCS-2003-011.) The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. In this paper, we investigate a Bayesian approach to infer, at the source host, the reason for a packet loss, whether congestion or wireless transmission error. Our approach is "mostly" end-to-end since it requires only one long-term average quantity (namely, the long-term average packet loss probability over the wireless segment) that may be best obtained with help from the network (e.g. a wireless access agent). Specifically, we use Maximum Likelihood Ratio tests to evaluate TCP as a classifier of the type of packet loss. We study the effectiveness of short-term classification of packet errors (congestion vs. wireless), given stationary prior error probabilities and distributions of packet delays conditioned on the type of packet loss (measured over a larger time scale). Using our Bayesian-based approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient online error classifier can be built. We introduce a simple queueing model to underline the conditional delay distributions arising from different kinds of packet losses over a heterogeneous wired/wireless path. We show how Hidden Markov Models (HMMs) can be used by a TCP connection to infer conditional delay distributions efficiently. We demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and by penalties on incorrect classification.
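As a companion to the likelihood-ratio sketch above, a Bayesian variant weights the conditional delay likelihoods by prior loss probabilities and picks the maximum a posteriori cause. The priors and delay models below are invented for illustration; the paper obtains them from long-term averages and HMM-inferred delay distributions.

```python
from scipy.stats import norm

# Invented priors and conditional delay models (seconds)
prior = {"congestion": 0.7, "wireless": 0.3}
delay_model = {"congestion": norm(0.120, 0.020), "wireless": norm(0.060, 0.015)}

def map_classify(observed_delay):
    """Maximum a posteriori cause of a packet loss, given the observed delay."""
    unnormalised = {cause: prior[cause] * delay_model[cause].pdf(observed_delay)
                    for cause in prior}
    total = sum(unnormalised.values())
    posterior = {cause: value / total for cause, value in unnormalised.items()}
    return max(posterior, key=posterior.get), posterior

print(map_classify(0.085))
```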

Relevance: 10.00%

Abstract:

Recent measurement-based studies reveal that most Internet connections are short in terms of the amount of traffic they carry (mice), while a small fraction of the connections carry a large portion of the traffic (elephants). A careful study of the TCP protocol shows that, without help from an Active Queue Management (AQM) policy, short connections tend to lose to long connections in their competition for bandwidth. This is because short connections do not gain detailed knowledge of the network state, and therefore they are doomed to be less competitive due to the conservative nature of the TCP congestion control algorithm. Inspired by the Differentiated Services (Diffserv) architecture, we propose to give preferential treatment to short connections inside the bottleneck queue, so that short connections experience a lower packet drop rate than long connections. This is done by employing the RIO (RED with In and Out) queue management policy, which uses different drop functions for different classes of traffic. Our simulation results show that: (1) in a highly loaded network, preferential treatment is necessary to provide short TCP connections with better response time and fairness without hurting the performance of long TCP connections; (2) the proposed scheme still delivers packets in FIFO order at each link, thus maintaining the statistical multiplexing gain and avoiding packet reordering; (3) choosing a smaller default initial timeout value for TCP can help enhance the performance of short TCP flows, but not as effectively as our scheme and at the risk of congestion collapse; (4) in the worst case, our proposal works as well as a regular RED scheme in terms of response time and goodput.
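RIO keeps two RED drop curves: a gentler one for "in" packets (here, those of short flows), driven by the average queue of "in" packets, and a stricter one for "out" packets (long flows), driven by the total average queue. A minimal sketch follows; the thresholds and maximum drop probabilities are illustrative, not the values used in the paper's simulations.

```python
def red_drop_prob(avg_queue, min_th, max_th, max_p):
    """Standard RED drop probability as a function of the average queue size (packets)."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def rio_drop_prob(avg_queue_in, avg_queue_total, is_short_flow):
    """RIO (RED with In and Out) with illustrative parameters:
    short-flow ('in') packets see a lenient curve on the 'in' average queue,
    long-flow ('out') packets see a strict curve on the total average queue."""
    if is_short_flow:
        return red_drop_prob(avg_queue_in, min_th=40, max_th=70, max_p=0.02)
    return red_drop_prob(avg_queue_total, min_th=10, max_th=40, max_p=0.10)

print(rio_drop_prob(avg_queue_in=20, avg_queue_total=30, is_short_flow=True))   # 0.0
print(rio_drop_prob(avg_queue_in=20, avg_queue_total=30, is_short_flow=False))  # ~0.067
```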

Relevance: 10.00%

Abstract:

Overlay networks have been used for adding and enhancing functionality for end-users without requiring modifications in the Internet core mechanisms. Overlay networks have been used for a variety of popular applications including routing, file sharing, content distribution, and server deployment. Previous work has focused on devising practical neighbor selection heuristics under the assumption that users conform to a specific wiring protocol. This is not a valid assumption in highly decentralized systems like overlay networks. Overlay users may act selfishly and deviate from the default wiring protocols by utilizing knowledge they have about the network when selecting neighbors to improve the performance they receive from the overlay. This thesis goes against the conventional thinking that overlay users conform to a specific protocol. The contributions of this thesis are threefold. It provides a systematic evaluation of the design space of selfish neighbor selection strategies in real overlays, evaluates the performance of overlay networks that consist of users that select their neighbors selfishly, and examines the implications of selfish neighbor and server selection for overlay protocol design and service provisioning, respectively. This thesis develops a game-theoretic framework that provides a unified approach to modeling Selfish Neighbor Selection (SNS) wiring procedures on behalf of selfish users. The model is general, and takes into consideration costs reflecting network latency and user preference profiles, the inherent directionality in overlay maintenance protocols, and connectivity constraints imposed on the system designer. Within this framework, the notion of a user's "best response" wiring strategy is formalized as a k-median problem on asymmetric distances and is used to obtain overlay structures in which no node can re-wire to improve the performance it receives from the overlay. Evaluation results presented in this thesis indicate that selfish users can reap substantial performance benefits when connecting to overlay networks composed of non-selfish users. In addition, in overlays that are dominated by selfish users, the resulting stable wirings are optimized to such a great extent that even non-selfish newcomers can extract near-optimal performance through naïve wiring strategies. To capitalize on the performance advantages of optimal neighbor selection strategies and the emergent global wirings that result, this thesis presents EGOIST: an SNS-inspired overlay network creation and maintenance routing system. Through an extensive measurement study on the deployed prototype, results presented in this thesis show that EGOIST's neighbor selection primitives outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, these results demonstrate that EGOIST is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overheads. This thesis also studies selfish neighbor selection strategies for swarming applications. The main focus is on n-way broadcast applications where each of the n overlay users wants to push its own distinct file to all other destinations as well as download their respective data files. Results presented in this thesis demonstrate that the performance of our swarming protocol for n-way broadcast on top of overlays of selfish users is far superior to the performance on top of existing overlays.
In the context of service provisioning, this thesis examines the use of distributed approaches that enable a provider to determine the number and location of servers for optimal delivery of content or services to its selfish end-users. To leverage recent advances in virtualization technologies, this thesis develops and evaluates a distributed protocol to migrate servers based on end-user demand and only on local topological knowledge. Results under a range of network topologies and workloads suggest that the performance of the distributed deployment is comparable to that of the optimal but unscalable centralized deployment.
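The thesis formalizes a user's best-response wiring as a k-median problem on asymmetric distances. The sketch below solves a tiny instance by exhaustive search only to make the objective concrete (choose k neighbors minimizing the total cost of reaching every destination through the best chosen neighbor); the latency matrix is invented, and a real system such as EGOIST would rely on approximation algorithms rather than enumeration.

```python
import itertools
import numpy as np

def best_response_wiring(node, dist, k):
    """Pick k neighbors for `node` minimizing
        sum over destinations t of  min over chosen neighbors v of ( d(node, v) + d(v, t) ),
    where `dist` is an asymmetric distance (latency) matrix. Exhaustive search for small n."""
    n = dist.shape[0]
    others = [v for v in range(n) if v != node]
    best_set, best_cost = None, float("inf")
    for candidate in itertools.combinations(others, k):
        cost = sum(min(dist[node, v] + dist[v, t] for v in candidate) for t in others)
        if cost < best_cost:
            best_set, best_cost = candidate, cost
    return best_set, best_cost

# Invented asymmetric latency matrix (ms) for 5 overlay nodes
rng = np.random.default_rng(0)
dist = rng.uniform(10.0, 100.0, size=(5, 5))
np.fill_diagonal(dist, 0.0)
print(best_response_wiring(node=0, dist=dist, k=2))
```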

Relevance: 10.00%

Abstract:

Memories in Adaptive Resonance Theory (ART) networks are based on matched patterns that focus attention on those portions of bottom-up inputs that match active top-down expectations. While this learning strategy has proved successful for both brain models and applications, computational examples show that attention to early critical features may later distort memory representations during online fast learning. For supervised learning, biased ARTMAP (bARTMAP) solves the problem of over-emphasis on early critical features by directing attention away from previously attended features after the system makes a predictive error. Small-scale, hand-computed analog and binary examples illustrate key model dynamics. Two-dimensional simulation examples demonstrate the evolution of bARTMAP memories as they are learned online. Benchmark simulations show that featural biasing also improves performance on large-scale examples. One example, which predicts movie genres and is based, in part, on the Netflix Prize database, was developed for this project. Both first principles and consistent performance improvements across all simulation studies suggest that featural biasing should be incorporated by default in all ARTMAP systems. Benchmark datasets and bARTMAP code are available from the CNS Technology Lab Website: http://techlab.bu.edu/bART/.
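bARTMAP's biasing mechanism is not reproduced here; as background for readers unfamiliar with ART, the sketch below shows the standard fuzzy ART choice and match (vigilance) computations on which ARTMAP category selection rests, with complement-coded inputs and invented parameter values.

```python
import numpy as np

def choose_category(input_vec, weights, alpha=0.01, rho=0.75):
    """Standard fuzzy ART category choice with a vigilance test.
    input_vec : complement-coded input, values in [0, 1]
    weights   : (n_categories, n_features) weight matrix
    Returns the winning category index that passes vigilance, or None."""
    I = np.asarray(input_vec, dtype=float)
    fuzzy_and = lambda x, w: np.minimum(x, w)
    choice = np.array([fuzzy_and(I, w).sum() / (alpha + w.sum()) for w in weights])
    for j in np.argsort(-choice):                         # search categories by choice value
        match = fuzzy_and(I, weights[j]).sum() / I.sum()  # match criterion
        if match >= rho:                                  # vigilance passed
            return int(j)
    return None

# Two-feature input with complement coding: I = (a1, a2, 1 - a1, 1 - a2)
a = np.array([0.8, 0.2])
I = np.concatenate([a, 1.0 - a])
W = np.array([[0.7, 0.1, 0.2, 0.8],
              [0.2, 0.6, 0.7, 0.3]])
print(choose_category(I, W))
```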

Relevance: 10.00%

Abstract:

Imprisonment is the most severe penalty utilised by the criminal courts in Ireland. In recent decades the prison population has grown significantly despite both official and public calls to reduce the use of the sanction. Two other sanctions are available to the Irish sentencer which may be used as direct and comparable sentences in lieu of a term of imprisonment, namely the community service order and the suspended sentence. The community service order remains under-utilised as an alternative to the custodial sentence. The suspended sentence is used quite liberally, but its function may be more closely related to the aim of deterrence than to avoiding the use of the custodial sentence. Thus the aim of decarceration may not be fully realised in practice when either sanction is utilised. The decarcerative effect of either sanction is largely dependent upon the specific purpose which judges invest in the sanction. Judges may also be inhibited in the use of either sanction if they lack confidence that the sentence will be appropriately monitored and executed. The purpose of this thesis is to examine the role of the community service order and the suspended sentence in Irish sentencing practice. Although community service and the suspended sentence present primarily as alternatives to the custodial sentence, the manner in which judges utilise or fail to utilise the sanctions may differ significantly from this primary manifestation. The study therefore examines judges' cognitions and expectations of both sanctions to explore their underlying purposes and to reveal the manner in which judges use the sanctions in practice. To access this previously undisclosed information a number of methodologies were deployed. An extensive literature review was conducted to delineate the purpose and functionality of both sanctions. Quantitative data were gathered by sampling for the suspended sentence and the part-suspended sentence, where deficiencies were apparent, to show the actual frequency of use of those sanctions. Qualitative methodologies were used, by way of focus groups and semi-structured interviews with judges at all jurisdictional levels, to elucidate the purposes of both sanctions; these methods allowed a deeper investigation of the factors which may promote or inhibit such usage. The relative under-utilisation of the community service order as an alternative to the custodial sentence may in part be explained by a reluctance on the part of some judges to equate it with a real custodial sentence. For most judges who use the sanction, particularly at summary level, community service serves a decarcerative function. The suspended sentence continues to be used extensively. It operates partly as a decarcerative penalty, but in practice the purpose of deterrence may overtake its theoretical purpose, namely the avoidance of custody. Despite ongoing criticism of executive agencies such as the Probation Service and the Prosecution in the supervision of such penalties, both sanctions continue to be used. Engagement between the criminal justice actors may facilitate better outcomes in the use of either sanction. The purposes for which both sanctions are deployed find their meaning essentially in the practices of the judges themselves, as opposed to any statutory or theoretical claims upon their use or purpose.