7 results for Default penalties

in the Chinese Academy of Sciences Institutional Repositories Grid Portal


Relevance:

20.00%

Publisher:

Abstract:

The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top-down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
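As a concrete instance (the standard "birds fly" example, not taken from the abstract itself), a normal default in such a logic can be written:

\[
\frac{\mathit{Bird}(x) : \mathit{Flies}(x)}{\mathit{Flies}(x)}
\]

read as: if Bird(x) is provable and Flies(x) is consistent with current beliefs, conclude Flies(x). A later observation such as Penguin(tweety), together with the axiom \( \mathit{Penguin}(x) \rightarrow \neg\mathit{Flies}(x) \), forces the retraction of Flies(tweety), which is exactly the non-monotonic revision of derived beliefs the abstract describes.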

Relevance:

10.00%

Publisher:

Abstract:

The transmission properties of data amplitude modulation (AM) and frequency modulation (FM) in a radio-over-fiber (RoF) system are studied numerically. The influences of fiber dispersion and nonlinearity on different microwave modulation schemes, including double-sideband (DSB), single-sideband (SSB), and optical carrier suppression (OCS), are investigated and compared. The power penalties at the base station (BS) and the eye-opening penalties of the recovered data at the end users are both calculated and analyzed. Numerical simulation results reveal that the power penalty of FM can be drastically decreased because FM can achieve a larger modulation depth than AM. The local spectrum broadening around the subcarrier microwave frequency caused by fiber nonlinearity in AM can also be eliminated with FM. It is demonstrated for the first time that the eye openings of the FM-recovered data can be controlled by the modulation depth and the coding format. A negative-voltage encoding format was used to further decrease the RF frequency and thus increase the fluctuation period, given the inverse relationship between the two.
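For reference, the two penalty measures mentioned above are conventionally computed as sketched below (the function names and numbers are invented for illustration, not taken from the paper):

import numpy as np

def power_penalty_db(p_back_to_back_dbm, p_after_fiber_dbm):
    # Power penalty (dB): extra received power needed after fiber
    # transmission to reach the same target BER as back-to-back.
    return p_after_fiber_dbm - p_back_to_back_dbm

def eye_opening_penalty_db(eye_ref, eye_received):
    # Eye-opening penalty (dB) relative to a reference eye height.
    return 10 * np.log10(eye_ref / eye_received)

print(power_penalty_db(-24.0, -21.5))    # 2.5 dB
print(eye_opening_penalty_db(1.0, 0.7))  # about 1.5 dB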

Relevance:

10.00%

Publisher:

Abstract:

Wavelength-tunable electro-absorption modulated distributed Bragg reflector lasers (TEMLs) are promising light sources for dense wavelength-division multiplexing (DWDM) optical fiber communication systems because of their high modulation speed, small chirp, low drive voltage, compactness, and fast wavelength tuning, which together increase transmission capacity, functionality, and flexibility. Materials with bandgap differences as large as 250 nm have been integrated on the same wafer by a combined technique of selective area growth (SAG) and quantum well intermixing (QWI), which supplies a flexible and controllable platform for photonic integrated circuits (PICs). A TEML has been fabricated by this technique for the first time. The device has the following characteristics: a threshold current of 37 mA; an output power of 3.5 mW at 100 mA injection and 0 V modulator bias; an extinction ratio of more than 20 dB, with the modulator reverse voltage swept from 0 V to 2 V, when coupled into a single-mode fiber; and a wavelength tuning range of 4.4 nm covering six 100-GHz WDM channels. A clearly open eye diagram is observed when the integrated EAM is driven with a 10-Gb/s electrical NRZ signal. Good transmission characteristics are exhibited, with power penalties of less than 2.2 dB at a bit error ratio (BER) of 10^-10 after 44.4 km of standard-fiber transmission.
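As a quick sanity check of the reported tuning range (assuming a C-band center wavelength near 1550 nm, which the abstract does not state):

c = 3.0e8                    # speed of light, m/s
lam = 1550e-9                # assumed center wavelength, m
dlam = 4.4e-9                # reported tuning range, m
df = c * dlam / lam**2       # corresponding frequency span, Hz
print(df / 1e9)              # ~549 GHz
print(int(df // 100e9) + 1)  # spans ~6 channel slots at 100 GHz spacing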

Relevance:

10.00%

Publisher:

Abstract:

A transmission volume phase holographic grating (VPHG) is adopted as the spectral element in a real-time optical channel performance monitor (OCPM), which is urgently needed in dense wavelength-division multiplexing (DWDM) systems. The tolerance of the incidence angle, which can be fully specified by two angles, θ and φ, is derived in this paper. By convention, the incidence plane is assumed perpendicular to the fringes when the incidence angle is discussed; here the non-perpendicular case is considered. By combining the theoretical analysis of the VPHG with its use in OCPM and varying θ and φ precisely in both computation and experiment, the two physical quantities that fully specify the performance of the VPHG, the diffraction efficiency and the resolution, are analyzed. The results show that the diffraction efficiency varies greatly with changes in θ or φ, but viewed across the whole C-band, only the peak diffraction efficiency drifts to another wavelength. As for the resolution, it deteriorates more rapidly than the diffraction efficiency with changes in φ, and more slowly with changes in θ. Only if |φ| ≤ 1° and α_B − 0.5° ≤ θ ≤ α_B + 0.5° (where α_B is the Bragg angle) is the performance of the VPHG good enough for use in an OCPM system.
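For orientation, the on-Bragg diffraction efficiency of an unslanted transmission VPHG is commonly estimated with Kogelnik's coupled-wave result; the sketch below uses that standard formula with assumed parameter values (none are given in the abstract):

import numpy as np

def kogelnik_efficiency(delta_n, thickness, wavelength, theta_bragg):
    # On-Bragg efficiency of an unslanted transmission phase grating:
    # eta = sin^2( pi * delta_n * d / (lambda * cos(theta_B)) )
    nu = np.pi * delta_n * thickness / (wavelength * np.cos(theta_bragg))
    return np.sin(nu) ** 2

eta = kogelnik_efficiency(delta_n=0.02, thickness=10e-6,
                          wavelength=1550e-9,
                          theta_bragg=np.deg2rad(22.5))
print(f"predicted peak efficiency: {eta:.2f}")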

Relevance:

10.00%

Publisher:

Abstract:

The Simulating WAves Nearshore (SWAN) wave model has been widely used in coastal areas, lakes, and estuaries. However, in four typical test cases we found poor agreement between model results and measurements when the default parameters of SWAN's source-function formulas were used to simulate waves in the Bohai Sea. We also found that, for the same wind process, the simulated results of the two wind-generation expressions (Komen and Janssen) differed considerably. Further study showed that the proportionality coefficient α in the linear growth term of the wave-growth source function plays an underappreciated role in wave development. Based on experiments and analysis, we argue that the coefficient α should vary rather than remain constant, and we therefore introduced into the linear growth term a coefficient α that changes with the friction velocity u*. Four weather processes were used to validate this improvement: the results with the varying coefficient α agree much better with the measurements than those with the default constant coefficient, and the large differences between the Komen and Janssen wind-generation expressions were eliminated. We also used the four weather processes to test the new white-capping mechanism based on the cumulative steepness method. Its parameters proved unsuitable for the Bohai Sea, but Alkyon's white-capping mechanism can be applied to the Bohai Sea after amendment, demonstrating that the improvement of the parameter α can improve simulated results for the Bohai Sea.
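To make the role of α concrete, the sketch below writes the linear growth term in the Cavaleri and Malanotte-Rizzoli form used by SWAN (quoted here from memory of the SWAN documentation) and contrasts the default constant α with a u*-dependent coefficient; alpha_of_ustar is a hypothetical placeholder, since the abstract does not give the paper's actual dependence:

import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def linear_growth(sigma, theta, theta_wind, u_star, alpha):
    # Linear wind-input term A(sigma, theta); alpha is the
    # proportionality coefficient discussed in the abstract.
    directional = max(0.0, np.cos(theta - theta_wind))
    sigma_pm = 2.0 * np.pi * 0.13 * g / (28.0 * u_star)  # PM peak frequency
    h_filter = np.exp(-(sigma / sigma_pm) ** 4)          # low-frequency cutoff
    return alpha / (2.0 * np.pi * g**2) * (u_star * directional) ** 4 * h_filter

def alpha_of_ustar(u_star, a0=1.5e-3, u_ref=0.3):
    # Hypothetical u*-dependent coefficient, for illustration only.
    return a0 * u_star / u_ref

A_const = linear_growth(1.0, 0.0, 0.0, 0.4, alpha=1.5e-3)
A_vary = linear_growth(1.0, 0.0, 0.0, 0.4, alpha=alpha_of_ustar(0.4))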

Relevance:

10.00%

Publisher:

Abstract:

Seismic exploration is the main tool of petroleum exploration. As society's demand for petroleum grows and the level of exploration rises, imaging areas of complex geological structure has become the main task of the oil industry. Seismic prestack depth migration emerged for this purpose: it images complex structures well, but its result depends strongly on the velocity model, so velocity model building has become the central research problem of prestack depth migration. This thesis systematically analyzes the differences in prestack depth migration practice between China and abroad, and develops a tomographic method that requires no layered velocity model, a residual-curvature velocity-analysis method based on a velocity model, and a method for removing pre-processing errors.

The thesis first analyzes the tomographic method of velocity analysis, which is theoretically complete but difficult to apply. The method picks first arrivals, compares the picked arrivals with arrivals computed in a theoretical velocity model, and back-projects the differences along the ray paths to obtain an updated velocity model. It relies only on the high-frequency assumption, so it is effective and efficient. It still has a shortcoming, however: picking first arrivals in prestack data is difficult, because the signal-to-noise ratio is low and many events cross each other, especially in areas of complex geological structure. To address this picking problem, a new tomographic velocity-analysis method requiring no layered velocity model is developed. Events need not be picked continuously in time; they can be picked selectively according to reliability. The method needs not only the picked times, as routine tomography does, but also the slopes of events, and a high-resolution slope-analysis method is used to improve picking precision.

In addition, research on residual-curvature velocity analysis shows that its performance and efficiency are poor: its assumptions are rigid, and as a local optimization method it cannot solve for velocity in areas with strong lateral velocity variation. A new, globally optimized method is developed to improve the precision of velocity model building.

So far the workflow of prestack depth migration in China follows the one used abroad: the original seismic data are first corrected to a datum plane, and migration is then performed. The well-known successes are in the Gulf of Mexico, where the near-surface structure is simple, pre-processing is straightforward, and its precision is high. In China, by contrast, most seismic work is on land, the near-surface layer is complex, and in some areas the pre-processing error is large enough to degrade velocity building. A new method is therefore developed to remove the pre-processing error.

The main contributions are: (1) an effective tomographic velocity-building method requiring no layered velocity model; (2) a new high-resolution slope-analysis method; (3) a globally optimized residual-curvature velocity-building method based on a velocity model; and (4) an effective method for removing pre-processing errors. All of these methods have been verified with both theoretical calculations and field seismic data.
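The core back-projection idea described above can be sketched in a few lines (the grid, ray geometry, and picks are invented for illustration; this is not the thesis's algorithm):

import numpy as np

# L[i, j] = length of ray i inside cell j; s = slowness per cell.
rng = np.random.default_rng(0)
n_rays, n_cells = 60, 25
L = rng.uniform(0.0, 1.0, size=(n_rays, n_cells))   # ray segment lengths, km
s_true = 1.0 / rng.uniform(2.0, 4.0, size=n_cells)  # true slowness, s/km
t_picked = L @ s_true                               # "picked" first arrivals

s_model = np.full(n_cells, 1.0 / 3.0)               # starting model
row_sums = L.sum(axis=1)                            # total length of each ray
col_sums = L.sum(axis=0)                            # total length in each cell
for _ in range(200):
    residual = t_picked - L @ s_model               # picked minus computed times
    # Back-project each residual along its ray path (SIRT-style update):
    s_model += (L.T @ (residual / row_sums)) / col_sums

print(np.abs(s_model - s_true).max())               # should be small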

Relevance:

10.00%

Publisher:

Abstract:

Eight experiments tested how object-array structure and learning location influence the establishment and use of self-to-object and object-to-object spatial representations during locomotion and reorientation. In Experiments 1 to 4, participants learned an object array, either regular or irregular, from its periphery or from within it, and then pointed to objects while blindfolded in three conditions: before turning (baseline), after rotating 240 degrees (updating), and after disorientation (disorientation). In Experiments 5 to 8, participants were instructed before rotation to keep track of either self-to-object or object-to-object spatial relations. In each condition the configuration error, defined as the standard deviation across target objects of each object's mean signed pointing error, was calculated as an index of the fidelity of the representation used in that condition. The results indicate that participants form both self-to-object and object-to-object spatial representations after learning an object array, and that array structure influences which representation is selected during updating: by default, the object-to-object representation is updated when the array is regular, and the self-to-object representation is updated when it is irregular. People can, however, update the other representation when required to do so. The fidelity of the representations constrains this kind of "switch": people can switch only from a low-fidelity representation to a high-fidelity one, or between two representations of similar fidelity, not from a high-fidelity representation to a low-fidelity one. Learning location may influence fidelity: when people learned at the periphery of the array, they acquired both self-to-object and object-to-object representations of high fidelity, but when they learned amidst the array, they acquired only a high-fidelity self-to-object representation, while the fidelity of the object-to-object representation was low.
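The configuration error defined above is straightforward to compute; a minimal sketch (with invented pointing data) follows:

import numpy as np

def configuration_error(errors_by_object):
    # errors_by_object: one array of signed pointing errors (degrees)
    # per target object. The index is the standard deviation, across
    # objects, of each object's mean signed error.
    per_object_means = [np.mean(e) for e in errors_by_object]
    return np.std(per_object_means, ddof=1)

pointing_errors = [np.array([5.0, 8.0, 3.0]),     # object 1
                   np.array([-12.0, -9.0]),       # object 2
                   np.array([20.0, 14.0, 17.0])]  # object 3
print(configuration_error(pointing_errors))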