877 results for default hypothesis
Abstract:
The need to make default assumptions is frequently encountered in reasoning about incompletely specified worlds. Inferences sanctioned by default are best viewed as beliefs which may well be modified or rejected by subsequent observations. It is this property which leads to the non-monotonicity of any logic of defaults. In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this class we develop a complete proof theory and show how to interface it with a top down resolution theorem prover. Finally, we provide criteria under which the revision of derived beliefs must be effected.
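To make the form of such defaults concrete, the classic "birds typically fly" rule can be written in the standard prerequisite : justification / consequent notation (the Bird and Flies predicates are a textbook illustration, not an example taken from the paper):

    \frac{\mathit{Bird}(x) \;:\; \mathit{Flies}(x)}{\mathit{Flies}(x)}

Read: if Bird(x) is believed and Flies(x) is consistent with the current beliefs, conclude Flies(x). Non-monotonicity appears when a later observation such as Penguin(tweety), together with a rule that penguins do not fly, makes Flies(tweety) inconsistent and forces that default conclusion to be withdrawn.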
Abstract:
The space-time cross-correlation function C_T(r, τ) of local temperature fluctuations in turbulent Rayleigh-Bénard convection is obtained from simultaneous two-point time series measurements. The obtained C_T(r, τ) is found to have the scaling form C_T(r_E, 0) with r_E = [(r − Uτ)² + V²τ²]^(1/2), where U and V are two characteristic velocities associated with the mean and rms velocities of the flow. The experiment verifies the theory and demonstrates its applications to a class of turbulent flows in which the requirement of Taylor's frozen flow hypothesis is not met.
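As a minimal numerical sketch of this elliptic scaling form (synthetic signals; the function names and the parameters U and V below are illustrative assumptions, not the authors' analysis code):

    import numpy as np

    def cross_corr(x, y, max_lag):
        # Normalized space-time cross-correlation C_T(r, tau) estimated from two
        # temperature time series measured a fixed separation r apart.
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        n = len(x)
        lags = np.arange(-max_lag, max_lag + 1)
        c = np.array([np.mean(x[max(0, -l):n - max(0, l)] * y[max(0, l):n - max(0, -l)])
                      for l in lags])
        return lags, c

    def r_E(r, tau, U, V):
        # Elliptic scaling variable: C_T(r, tau) should collapse onto C_T(r_E, 0),
        # with U the mean-flow velocity and V the rms velocity.
        return np.sqrt((r - U * tau) ** 2 + (V * tau) ** 2)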
Abstract:
In "high nitrate, low chlorophyll" (HNLC) ocean regions, iron has typically been regarded as the limiting factor for phytoplankton production. This "iron hypothesis" needs to be tested in various oceanic environments to understand the role of iron in marine biological and biogeochemical processes. In this paper, three in vitro iron enrichment experiments were performed in Prydz Bay and at the Polar Front north of the Ross Sea to study the role of iron in phytoplankton production. At the Polar Front north of the Ross Sea, iron addition significantly (P < 0.05, Student's t-test) stimulated phytoplankton growth. In Prydz Bay, however, both the iron treatments and the controls showed rapid phytoplankton growth, and no significant effect of iron addition (P > 0.05, Student's t-test) was observed. These results confirm the limiting role of iron in the Ross Sea and indicate that iron was not the primary factor limiting phytoplankton growth in Prydz Bay. Because the light environment for phytoplankton was enhanced in the experimental bottles, light was assumed to be responsible for the rapid growth of phytoplankton in all treatments and to be the limiting factor controlling field phytoplankton growth in Prydz Bay. During the incubation experiments, nutrient consumption ratios also changed with the physiological status and growth phase of the phytoplankton cells. When phytoplankton growth was stimulated by iron addition, N was the first and Si the last nutrient whose uptake was enhanced. The Si/N and Si/P consumption ratios of phytoplankton in the stationary and decay phases were significantly higher than those of rapidly growing phytoplankton. These findings are helpful for studies of the marine ecosystem and biogeochemistry in Prydz Bay, and are also valuable for biogeochemical studies of carbon and nutrients in other marine environments.
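As a minimal sketch of the statistical comparison used above, a two-sample Student's t-test on iron-amended versus control bottles can be run as follows; the growth-rate numbers are invented placeholders, not the measured data:

    import numpy as np
    from scipy import stats

    # Hypothetical specific growth rates (per day) from triplicate incubation bottles.
    iron_added = np.array([0.62, 0.58, 0.65])   # +Fe treatments
    control = np.array([0.31, 0.35, 0.28])      # unamended controls

    t, p = stats.ttest_ind(iron_added, control)
    print(f"t = {t:.2f}, p = {p:.3f}")
    # p < 0.05 would indicate significant iron stimulation, as at the Ross Sea Polar Front;
    # p > 0.05 would match the Prydz Bay result, where the controls also grew rapidly.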
Abstract:
Seismic exploration is the main tool of petroleum exploration. As society's demand for petroleum grows and exploration matures, exploration in areas of complex geological structure has become the main task of the oil industry, and seismic prestack depth migration has emerged because of its strong ability to image complex structures. Its result depends strongly on the velocity model, so velocity model building for prestack depth migration has become a major research area. This thesis systematically analyzes the differences in seismic prestack depth migration between our country and abroad, and develops a tomographic velocity analysis method that does not require a layered velocity model, a residual-curvature velocity analysis method based on a velocity model, and a method for removing pre-processing errors. The tomographic velocity analysis method is examined first. It is theoretically complete but difficult to apply: it picks first arrivals, compares the picked times with the arrivals calculated in a theoretical velocity model, and back-projects the differences along the ray paths to obtain a new velocity model. Because it relies only on the high-frequency assumption, it is effective and efficient, but it still has a shortcoming: picking first arrivals in prestack data is difficult, since the signal-to-noise ratio is low and many events cross one another, and these problems are most severe in areas of complex geological structure. On this basis, a new tomographic velocity analysis method that does not require a layered velocity model is developed to address the picking problem. It does not require continuous picking of event times; picks can be made wherever they are reliable. Besides the picked times used in routine tomography, the method also uses the slopes of events, and a high-resolution slope analysis method is introduced to improve picking precision. Research on residual-curvature velocity analysis further shows that its performance and efficiency are poor because its assumptions are rigid and it is a local optimization method, so it cannot solve the velocity problem in areas with strong lateral velocity variation; a new method is therefore developed to improve the precision of velocity model building. So far the workflow of seismic prestack depth migration here is the same as abroad: before velocity model building, the original seismic data must be corrected to a datum plane, and then prestack depth migration is carried out. The best-known success is in the Gulf of Mexico, where the near-surface structure is simple, the pre-processing is simple, and its precision is high. In our country, however, most seismic work is on land, the near-surface layer is complex, and in some areas the pre-processing error is large and degrades velocity model building; a new method is therefore developed to remove the pre-processing error and improve the precision of velocity model building. The main contributions are: (1) an effective tomographic velocity model building method that requires no layered velocity model; (2) a new high-resolution slope analysis method; (3) a globally optimized residual-curvature velocity model building method based on a velocity model; (4) an effective method for removing pre-processing errors.
All of the methods listed above have been verified with theoretical calculations and real seismic data.
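A minimal sketch of the back-projection step described above, assuming precomputed straight-ray paths through a gridded slowness model; the grid bookkeeping and the SIRT-like update rule are illustrative simplifications, not the thesis implementation:

    import numpy as np

    def backproject_update(slowness, rays, picked_times):
        # One SIRT-like tomographic update: for each picked first arrival, compare the
        # picked traveltime with the traveltime computed through the current slowness
        # model, then spread the residual back along the ray path.
        # rays: list of (cells, lengths), where cells are (i, j) grid indices crossed
        # by the ray and lengths are the path lengths within those cells.
        update = np.zeros_like(slowness)
        hits = np.zeros_like(slowness)
        for (cells, lengths), t_obs in zip(rays, picked_times):
            t_calc = sum(slowness[c] * l for c, l in zip(cells, lengths))
            residual = t_obs - t_calc
            norm = sum(l * l for l in lengths)
            for c, l in zip(cells, lengths):
                update[c] += residual * l / norm   # minimum-norm slowness perturbation
                hits[c] += 1
        return slowness + np.where(hits > 0, update / np.maximum(hits, 1), 0.0)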
Abstract:
In this paper, we bound the generalization error of a class of Radial Basis Function networks, for certain well-defined function learning tasks, in terms of the number of parameters and the number of examples. We show that the total generalization error is partly due to the insufficient representational capacity of the network (because of its finite size) and partly due to insufficient information about the target function (because of the finite number of samples). We make several observations about generalization error which are valid irrespective of the approximation scheme. Our result also sheds light on ways to choose an appropriate network architecture for a particular problem.
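A minimal sketch of the two error sources described above, fitting Gaussian RBF networks of different sizes to different amounts of noisy data from a synthetic 1-D target; the target function, kernel width, and regularization value are illustrative assumptions, not the paper's setting:

    import numpy as np

    rng = np.random.default_rng(0)
    target = lambda x: np.sin(3 * x)          # the function to be learned

    def fit_rbf(x_train, y_train, n_centers, width=0.3, lam=1e-6):
        # Least-squares fit of a Gaussian RBF network with n_centers fixed centers.
        centers = np.linspace(-1, 1, n_centers)
        def phi(x):
            return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
        gram = phi(x_train)
        w = np.linalg.solve(gram.T @ gram + lam * np.eye(n_centers), gram.T @ y_train)
        return lambda x: phi(x) @ w

    x_test = np.linspace(-1, 1, 500)
    for n_centers, n_samples in [(5, 200), (50, 200), (50, 20)]:
        x = rng.uniform(-1, 1, n_samples)
        y = target(x) + 0.1 * rng.standard_normal(n_samples)
        f = fit_rbf(x, y, n_centers)
        mse = np.mean((f(x_test) - target(x_test)) ** 2)
        print(f"centers={n_centers:3d} samples={n_samples:3d} test MSE={mse:.4f}")
    # Too few centers: representational (approximation) error dominates.
    # Many centers but few samples: estimation error dominates.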
Abstract:
This work used computational simulation to analyze the vertical movement of the herbicide hexazinone, considering average default values from the literature and locally calculated values for its soil half-life (t½) and soil organic carbon adsorption coefficient (Koc) in a Latossolo Vermelho Distrófico (LVd) of Córrego do Espraiado, Ribeirão Preto-SP. Information on the evaluated soil, the climate, and the sugarcane crop was used in the CMLS-94 simulator. The base simulation scenario covered a period of one year and four months, with the crop harvest date in August and the application of hexazinone one month after harvest at a dose of 0.40 kg ha-1, the rate found in a commercial product used at the site. The following final depths and amounts were obtained at the end of the simulated period: a) scenario with "default" data: 1.89 m and 0.12 kg ha-1; b) scenario with locally calculated data: 2.78 m and 0.18 kg ha-1. Greater movement of the product was observed in the scenario simulated with local data, especially from the 63rd day after application, although the concentrations remained close in both scenarios. The depths reached do not compromise the water table in its saturated zone (40 m).
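A minimal sketch of the kind of piston-flow calculation with sorption retardation and first-order degradation that CMLS-style simulators perform; the soil, rainfall, Koc, and half-life numbers below are rough placeholders, not the study's default or locally calculated inputs, so the output will not reproduce the reported depths:

    def leach_depth(days, rain_mm_per_day, half_life_days, koc, oc_frac,
                    bulk_density=1.3, field_capacity=0.30, dose_kg_ha=0.40):
        # Piston-flow displacement of the pesticide front, slowed by a sorption
        # retardation factor, with first-order degradation from the soil half-life.
        # All numeric defaults here are rough placeholders.
        kd = koc * oc_frac                                 # sorption coefficient (L/kg)
        retardation = 1.0 + bulk_density * kd / field_capacity
        depth_m, amount = 0.0, dose_kg_ha
        for _ in range(days):
            q = rain_mm_per_day / 1000.0                   # crude daily drainage (m/day)
            depth_m += q / (field_capacity * retardation)  # front displacement
            amount *= 0.5 ** (1.0 / half_life_days)        # first-order decay
        return depth_m, amount

    print(leach_depth(days=480, rain_mm_per_day=3.5, half_life_days=90, koc=54, oc_frac=0.01))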
Abstract:
This thesis describes some aspects of a computer system for doing medical diagnosis in the specialized field of kidney disease. Because such a system faces the spectre of combinatorial explosion, this discussion concentrates on heuristics which control the number of concurrent hypotheses and efficient "compiled" representations of medical knowledge. In particular, the differential diagnosis of hematuria (blood in the urine) is discussed in detail. A protocol of a simulated doctor/patient interaction is presented and analyzed to determine the crucial structures and processes involved in the diagnosis procedure. The data structure proposed for representing medical information revolves around elementary hypotheses which are activated when certain findings are present. The process of disposing of findings, activating hypotheses, evaluating hypotheses locally, and combining hypotheses globally is examined for its heuristic implications. The thesis attempts to fit the problem of medical diagnosis into the framework of other Artificial Intelligence problems and paradigms and in particular explores the notions of pure search vs. heuristic methods, linearity and interaction, local vs. global knowledge, and the structure of hypotheses within the world of kidney disease.
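A minimal sketch of the finding-triggered hypothesis structure described above; the findings, hypotheses, trigger sets, and weights are invented placeholders, not the thesis's compiled medical knowledge:

    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        name: str
        triggers: set      # findings whose presence activates the hypothesis
        supports: dict     # finding -> weight used for local evaluation
        score: float = 0.0

    KIDNEY_HYPOTHESES = [
        Hypothesis("glomerulonephritis", {"hematuria", "proteinuria"},
                   {"hematuria": 2.0, "proteinuria": 3.0, "edema": 1.0}),
        Hypothesis("renal_stone", {"hematuria", "flank_pain"},
                   {"hematuria": 1.0, "flank_pain": 3.0, "fever": -1.0}),
    ]

    def diagnose(findings):
        # Activate only hypotheses with a trigger finding present (keeping the set of
        # concurrent hypotheses small), score each locally, and return them ranked.
        active = [h for h in KIDNEY_HYPOTHESES if h.triggers & findings]
        for h in active:
            h.score = sum(w for f, w in h.supports.items() if f in findings)
        return sorted(active, key=lambda h: h.score, reverse=True)

    print([h.name for h in diagnose({"hematuria", "flank_pain"})])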
Abstract:
King R. D., Whelan, K. E., Jones, F. M., Reiser, P. G. K., Bryant, C. H., Muggleton, S., Kell, D. B. and Oliver, S. G. (2004) Functional genomic hypothesis generation and experimentation by a robot scientist. Nature 427 (6971) p247-252
Abstract:
The paper consists of a series of suggestions and historical references on the basis of which it would become possible to think about and practice "spectator pedagogy" in the performing arts. Contemporary performance practices can claim a new kind of political relevance by focusing on the way a spectator's corporeal experience changes during and through the theatrical situation. The naive body produced by a performance is also the most susceptible to thoroughgoing political and ecological change. This is the author's first outline on this topic.
Abstract:
The default ARTMAP algorithm and its parameter values specified here define a ready-to-use general-purpose neural network system for supervised learning and recognition.
Abstract:
Default ARTMAP combines winner-take-all category node activation during training, distributed activation during testing, and a set of default parameter values that define a ready-to-use, general-purpose neural network system for supervised learning and recognition. Winner-take-all ARTMAP learning is designed so that each input would make a correct prediction if re-presented immediately after its training presentation, passing the "next-input test." Distributed activation has been shown to improve test set prediction on many examples, but an input that made a correct winner-take-all prediction during training could make a different prediction with distributed activation. Default ARTMAP 2 introduces a distributed next-input test during training. On a number of benchmarks, this additional feature of the default system increases accuracy without significantly decreasing code compression. This paper includes a self-contained default ARTMAP 2 algorithm for implementation.
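A minimal sketch of the next-input test idea described above, using a toy nearest-prototype learner rather than the actual ARTMAP match and learning equations; the class, its update rule, and the softmax-weighted distributed prediction are placeholders:

    import numpy as np

    class TinyPrototypeLearner:
        # Toy winner-take-all learner used only to illustrate the distributed
        # next-input test; it is not the ARTMAP architecture.
        def __init__(self):
            self.protos, self.labels = [], []

        def distributed_predict(self, x):
            # Distributed activation: every prototype votes with a softmax weight.
            d = np.array([np.linalg.norm(x - p) for p in self.protos])
            w = np.exp(-d) / np.exp(-d).sum()
            votes = {}
            for wi, lab in zip(w, self.labels):
                votes[lab] = votes.get(lab, 0.0) + wi
            return max(votes, key=votes.get)

        def train_one(self, x, label):
            if not self.protos:
                self.protos.append(x.copy()); self.labels.append(label)
                return
            winner = int(np.argmin([np.linalg.norm(x - p) for p in self.protos]))
            if self.labels[winner] == label:
                self.protos[winner] = 0.5 * (self.protos[winner] + x)  # winner-take-all update
            else:
                self.protos.append(x.copy()); self.labels.append(label)
            # Distributed next-input test: re-present x immediately after learning; if
            # the distributed prediction would now be wrong, learn further so it passes.
            if self.distributed_predict(x) != label:
                self.protos.append(x.copy()); self.labels.append(label)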
Abstract:
BP (89-A-1204); Defense Advanced Research Projects Agency (90-0083); National Science Foundation (IRI-90-00530); Air Force Office of Scientific Research (90-0175, 90-0128); Army Research Office (DAAL-03-88-K0088)
Abstract:
Does environmental regulation impair international competitiveness of pollution-intensive industries to the extent that they relocate to countries with less stringent regulation, turning those countries into "pollution havens"? We test this hypothesis using panel data on outward foreign direct investment (FDI) flows of various industries in the German manufacturing sector and account for several econometric issues that have been ignored in previous studies. Most importantly, we demonstrate that externalities associated with FDI agglomeration can bias estimates away from finding a pollution haven effect if omitted from the analysis. We include the stock of inward FDI as a proxy for agglomeration and employ a GMM estimator to control for endogenous time-varying determinants of FDI flows. Furthermore, we propose a difference estimator based on the least polluting industry to break the possible correlation between environmental regulatory stringency and unobservable attributes of FDI recipients in the cross-section. When accounting for these issues we find robust evidence of a pollution haven effect for the chemical industry.
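A minimal sketch of the difference-estimator idea described above: subtracting the least polluting industry's FDI outflow removes host-country attributes that are common to all industries, so the regulation effect is identified from how more pollution-intensive industries respond differently. The data layout, variable names, and pandas/statsmodels setup are assumptions rather than the authors' code, and the GMM step for endogenous regressors is omitted:

    import pandas as pd
    import statsmodels.formula.api as smf

    def pollution_haven_difference(df, clean_industry):
        # df: one row per (country, year, industry) with columns fdi_outflow,
        # regulation_stringency, and inward_fdi_stock (the agglomeration proxy).
        clean = df[df.industry == clean_industry][["country", "year", "fdi_outflow"]]
        merged = df.merge(clean, on=["country", "year"], suffixes=("", "_clean"))
        merged["fdi_diff"] = merged["fdi_outflow"] - merged["fdi_outflow_clean"]
        # Host-country attributes common to every industry cancel in fdi_diff; a more
        # negative response of dirtier industries to stringency would point to a
        # pollution haven effect.
        model = smf.ols("fdi_diff ~ regulation_stringency + inward_fdi_stock + C(industry)",
                        data=merged[merged.industry != clean_industry])
        return model.fit()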