914 results for Error Probability
Abstract:
Graduate Program in Agronomy (Plant Production) - FCAV
Abstract:
The aim of this paper is to compare 18 reference evapotranspiration models to the standard Penman-Monteith model in the Jaboticabal, Sao Paulo, region for the following time scales: daily, 5-day, 15-day and seasonal. A total of 5 years of daily meteorological data was used for the following analyses: accuracy (mean absolute percentage error, MAPE), precision (R²) and tendency (bias, i.e., systematic error, SE). The results were also compared at the 95% probability level with Tukey's test. The Priestley-Taylor (1972) method was the most accurate for all time scales, the Tanner-Pelton (1960) method was the most accurate in the winter, and the Thornthwaite (1948) method was the most accurate among the methods that use only temperature data in their equations.
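As a minimal sketch of the comparison statistics named in the abstract above (MAPE, R² and systematic error/bias), the following snippet computes them for an alternative evapotranspiration series against a Penman-Monteith reference; the series and values are hypothetical, not the paper's data.

```python
import numpy as np

def compare_to_reference(et_model, et_pm):
    """Accuracy (MAPE), precision (R^2) and tendency (bias/SE) of a model
    estimate against a Penman-Monteith reference series."""
    et_model, et_pm = np.asarray(et_model, float), np.asarray(et_pm, float)
    mape = 100.0 * np.mean(np.abs(et_model - et_pm) / et_pm)      # accuracy
    r2 = np.corrcoef(et_model, et_pm)[0, 1] ** 2                  # precision
    bias = np.mean(et_model - et_pm)                              # systematic error (mm/day)
    return mape, r2, bias

# Hypothetical daily ET series (mm/day)
et_pm = np.array([3.9, 4.2, 4.8, 5.1, 4.4])
et_pt = np.array([4.1, 4.0, 5.0, 5.4, 4.3])   # e.g. a Priestley-Taylor-type estimate
print(compare_to_reference(et_pt, et_pm))
```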
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Introduction: The aim was to confirm that the PSF (probability of stone formation) changed appropriately following medical therapy in recurrent stone formers. Materials and Methods: Data were collected on 26 Brazilian stone formers. A baseline 24-hour urine collection was performed prior to treatment. Details of the medical treatment initiated for stone disease were recorded. A PSF calculation was performed on the 24-hour urine sample using the 7 urinary parameters required: voided volume, oxalate, calcium, urate, pH, citrate and magnesium. A repeat 24-hour urine sample was collected for PSF calculation after treatment. Comparison was made between the PSF scores before and during treatment. Results: At baseline, 20 of the 26 patients (77%) had a high PSF score (> 0.5). Of the 26 patients, 17 (65%) showed an overall reduction in their PSF profiles with a medical treatment regimen. Eleven patients (42%) changed from high risk (PSF > 0.5) to low risk (PSF < 0.5), and 6 patients reduced their risk score but did not change risk category. Six patients (23%) remained in the high-risk category (> 0.5) during both assessments. Conclusions: The PSF score was reduced following medical treatment in the majority of patients in this cohort.
Abstract:
Watersheds are considered important study units for environmental planning with regard to the optimal use of water resources. Water scarcity is predicted and feared by many societies and proves to be an increasingly tangible problem nowadays. From the perspective of extreme events, this dissertation studies flood waves in the sub-basin of the stream Claro, which belongs to the Corumbataí watershed (SP), since they can also have devastating effects on the population. The ABC 6 software (a Decision Support System for Flood Routing Analysis in Complex Basins) was applied in order to obtain hydrographs and peak flows in the sub-basin of the stream Claro for return periods (RT) of 10 and 100 years, aiming to cover events of different magnitudes. The Soil Conservation Service (SCS) model and the SCS triangular hydrograph were adopted for the simulations. In parallel, the Kokei Uehara method was applied to obtain peak flow values under the same conditions, in order to compare results. Data collection was performed using geoprocessing tools. For data entry into ABC 6, the sub-basin of the stream Claro had to be fragmented, generating 7 small watersheds, in order to satisfy a software constraint: the maximum drainage area accepted is 50 km² for each watershed analyzed. For RT = 10 and 100 years, respectively, the peak flows obtained with ABC 6 were 46.10 and 95.45 m³/s, while with the Kokei Uehara method the results were 47.17 and 65.26 m³/s. The adoption of a single discretization time value for all watersheds was identified as a limitation of ABC 6, which interfered with the final results. The Kokei Uehara method considered the sub-basin of the stream Claro as a whole, which reduced the probability of error accumulation.
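The SCS runoff and triangular-hydrograph peak-flow calculations mentioned above can be sketched as follows; the curve number, storm depth, drainage area and time of concentration are hypothetical illustration values, not the parameters used in the dissertation.

```python
def scs_runoff_mm(p_mm, cn):
    """SCS curve-number direct runoff (mm) for a storm depth p_mm."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = 0.2 * s                      # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

def scs_triangular_peak(area_km2, runoff_mm, tc_h, dt_h):
    """Peak flow (m3/s) of the SCS triangular unit hydrograph."""
    tp = dt_h / 2.0 + 0.6 * tc_h      # time to peak (h)
    return 0.208 * area_km2 * runoff_mm / tp

# Hypothetical example: 45 km2 sub-watershed, CN = 75, 90 mm design storm
q_mm = scs_runoff_mm(90.0, 75)
print(scs_triangular_peak(45.0, q_mm, tc_h=3.0, dt_h=0.5))
```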
Abstract:
When assessing food intake patterns in groups of individuals, a major problem is finding the usual intake distribution. This study aimed at searching for a probability distribution to estimate the usual intake of nutrients using data from a cross-sectional investigation of nutrition students from a public university in São Paulo state, Brazil. Data on 119 women aged 19 to 30 years were used. All women answered a questionnaire about their lifestyle, diet and demographics. Food intake was evaluated from a non-consecutive three-day 24-hour food record. Different probability distributions were tested for vitamins C and E, pantothenic acid, folate, zinc, copper and calcium, for which data normalization was not possible. Empirical comparisons were performed, and inadequacy prevalence was calculated by comparison with the NRC method. It was concluded that if a more realistic distribution for usual intake is found, results can be more accurate than those achieved by other methods.
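A minimal sketch of testing candidate probability distributions for a usual-intake sample of the kind described above, using maximum-likelihood fits, a goodness-of-fit test and a prevalence-of-inadequacy cut-off; the data, candidate families (lognormal and gamma) and requirement value are hypothetical, not the study's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
intake = rng.lognormal(mean=4.0, sigma=0.4, size=119)   # hypothetical vitamin C intake (mg/day)

candidates = {"lognormal": stats.lognorm, "gamma": stats.gamma}
for name, dist in candidates.items():
    params = dist.fit(intake, floc=0)                   # maximum-likelihood fit
    ks_stat, p_value = stats.kstest(intake, dist.cdf, args=params)
    # Prevalence of inadequacy: mass of the fitted usual-intake distribution
    # below a hypothetical requirement (EAR) of 60 mg/day.
    inadequacy = dist.cdf(60.0, *params)
    print(f"{name}: KS={ks_stat:.3f}, p={p_value:.3f}, P(intake<EAR)={inadequacy:.2f}")
```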
Abstract:
Graduate Program in General Bases of Surgery - FMB
Abstract:
You recently published (Nature 374, 587; 1995) a report headed "Error re-opens 'scientific' whaling debate". The error in question, however, relates to commercial whaling, not to scientific whaling. Although Norway cites science as a basis for the way in which it sets its own quota, scientific whaling means something quite different, namely killing whales for research purposes. Any member of the International Whaling Commission (IWC) has the right to conduct a research catch under the International Convention for the Regulation of Whaling, 1946. The IWC has reviewed new research or scientific whaling programs for Japan and Norway since the IWC moratorium on commercial whaling began in 1986. In every case, the IWC advised Japan and Norway to reconsider the lethal aspects of their research programs. Last year, however, Norway started a commercial hunt in combination with its scientific catch, despite the IWC moratorium.
Abstract:
In this paper, a cross-layer solution for packet size optimization in wireless sensor networks (WSN) is introduced that captures the effects of multi-hop routing, the broadcast nature of the physical wireless channel, and the effects of error control techniques. A key result of this paper is that, contrary to conventional wireless networks, in wireless sensor networks longer packets reduce the collision probability. Consequently, an optimization solution is formalized using three different objective functions, i.e., packet throughput, energy consumption, and resource utilization. Furthermore, the effects of end-to-end latency and reliability constraints that may be required by a particular application are investigated. As a result, a generic, cross-layer optimization framework is developed to determine the optimal packet size in WSN. This framework is further extended to determine the optimal packet size in underwater and underground sensor networks. From this framework, the optimal packet sizes under various network parameters are determined.
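As a toy illustration of the packet-size trade-off discussed above (not the paper's cross-layer framework), the snippet below balances per-packet header overhead against the probability that a longer packet is corrupted; the bit error rate, header size and retransmission model are hypothetical.

```python
import numpy as np

def throughput(payload_bits, header_bits=64, ber=1e-4):
    """Expected useful bits delivered per transmitted bit, assuming a packet
    is retransmitted until received error-free (independent bit errors)."""
    total = payload_bits + header_bits
    p_success = (1.0 - ber) ** total          # probability the whole packet survives
    return (payload_bits / total) * p_success # efficiency x success probability

payloads = np.arange(50, 3000, 50)
eff = [throughput(l) for l in payloads]
best = payloads[int(np.argmax(eff))]
print(f"optimal payload under this toy model: {best} bits")
```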
Abstract:
Maximum-likelihood decoding is often the optimal decoding rule one can use, but it is very costly to implement in a general setting. Much effort has therefore been dedicated to finding efficient decoding algorithms that either achieve or approximate the error-correcting performance of the maximum-likelihood decoder. This dissertation examines two approaches to this problem. In 2003, Feldman and his collaborators defined the linear programming decoder, which operates by solving a linear programming relaxation of the maximum-likelihood decoding problem. As with many modern decoding algorithms, it is possible for the linear programming decoder to output vectors that do not correspond to codewords; such vectors are known as pseudocodewords. In this work, we completely classify the set of linear programming pseudocodewords for the family of cycle codes. For the case of the binary symmetric channel, another approximation of maximum-likelihood decoding was introduced by Omura in 1972. This decoder employs an iterative algorithm whose behavior closely mimics that of the simplex algorithm. We generalize Omura's decoder to operate on any binary-input memoryless channel, thus obtaining a soft-decision decoding algorithm. Further, we prove that the probability of the generalized algorithm returning the maximum-likelihood codeword approaches 1 as the number of iterations goes to infinity.
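A minimal sketch of a Feldman-style linear programming decoder over a binary symmetric channel, using the standard local parity-check (forbidden-set) constraints; the [7,4] Hamming parity-check matrix and crossover probability are chosen for illustration and differ from the cycle-code setting studied in the dissertation.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

H = np.array([[1, 1, 0, 1, 1, 0, 0],      # parity-check matrix of the [7,4] Hamming code
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
n = H.shape[1]

def lp_decode(y, p=0.05):
    # BSC log-likelihood costs: positive cost to set a received 0 to 1, negative for a received 1.
    llr = np.where(np.array(y) == 0, np.log((1 - p) / p), np.log(p / (1 - p)))
    A, b = [], []
    # Local codeword constraints: for each check and each odd-size subset S of its
    # neighborhood N, require sum_{S} x - sum_{N\S} x <= |S| - 1.
    for row in H:
        nbrs = np.flatnonzero(row)
        for k in range(1, len(nbrs) + 1, 2):
            for S in itertools.combinations(nbrs, k):
                a = np.zeros(n)
                a[list(S)] = 1.0
                a[[i for i in nbrs if i not in S]] = -1.0
                A.append(a)
                b.append(len(S) - 1)
    res = linprog(llr, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, 1)] * n, method="highs")
    return res.x   # integral optimum -> ML codeword; fractional -> pseudocodeword

print(np.round(lp_decode([0, 0, 1, 0, 0, 0, 0]), 3))
```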
Abstract:
The enzymatically catalyzed template-directed extension of an ssDNA/primer complex is an important reaction of extraordinary complexity. The DNA polymerase does not merely facilitate the insertion of dNMP, but also performs rapid screening of substrates to ensure a high degree of fidelity. Several kinetic studies have determined rate constants and equilibrium constants for the elementary steps that make up the overall pathway. This information is used to develop a macroscopic kinetic model, using an approach described by Ninio [Ninio J., 1987. Alternative to the steady-state method: derivation of reaction rates from first-passage times and pathway probabilities. Proc. Natl. Acad. Sci. U.S.A. 84, 663-667]. The principal idea of the Ninio approach is to track a single template/primer complex over time and to identify the expected behavior. The average time to insert a single nucleotide is a weighted sum of several terms, including the actual time to insert a nucleotide plus delays due to polymerase detachment from either the ternary (template-primer-polymerase) or quaternary (+nucleotide) complexes and time delays associated with the identification and ultimate rejection of an incorrect nucleotide from the binding site. The passage times of all events and their probability of occurrence are expressed in terms of the rate constants of the elementary steps of the reaction pathway. The model accounts for variations in the average insertion time with different nucleotides as well as the influence of G+C content of the sequence in the vicinity of the insertion site. Furthermore, the model provides estimates of error frequencies. If nucleotide extension is recognized as a competition between successful insertions and time-delaying events, it can be described as a binomial process with a probability distribution. The distribution gives the probability of extending a primer/template complex by a certain number of base pairs, and in general it maps annealed complexes into extension products.
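A minimal numerical sketch of the first-passage bookkeeping described above: the mean insertion time is computed as a probability-weighted sum of the productive passage time and the expected delays, and primer extension is treated as a binomial process; the passage times and probabilities are hypothetical, not values from the cited kinetic studies.

```python
from math import comb

# Hypothetical passage times (s) and occurrence probabilities of the competing events
t_insert, p_insert = 0.02, 0.90          # productive insertion of the correct dNMP
t_detach, p_detach = 0.50, 0.07          # polymerase detachment and rebinding delay
t_reject, p_reject = 0.10, 0.03          # binding and rejection of an incorrect nucleotide

# Ninio-style expected time per inserted nucleotide: delays recur until insertion succeeds
expected_delay = (p_detach * t_detach + p_reject * t_reject) / p_insert
mean_insertion_time = t_insert + expected_delay
print(f"mean time per inserted nucleotide: {mean_insertion_time:.3f} s")

# Binomial view of extension: probability of extending the primer by k bases
# out of n insertion attempts, each succeeding with probability p_insert.
def p_extend(k, n, p=p_insert):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

print(f"P(extend by 18 of 20 attempted bases) = {p_extend(18, 20):.3f}")
```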
Abstract:
The associationist account of early word learning is based on the co-occurrence between referents and words. Here we introduce a noisy cross-situational learning scenario in which the referent of the uttered word is eliminated from the context with probability gamma, thus modeling the noise produced by out-of-context words. We examine the performance of a simple associative learning algorithm and find a critical value gamma_c of the noise parameter above which learning is impossible. We use finite-size scaling to show that the sharpness of the transition persists across a region of order tau^(-1/2) about gamma_c, where tau is the number of learning trials, as well as to obtain the learning error (scaling function) in the critical region. In addition, we show that the distribution of durations of periods when the learning error is zero is a power law with exponent -3/2 at the critical point. Copyright (C) EPLA, 2012
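A toy simulation of the noisy cross-situational scenario described above, with a simple co-occurrence-count learner; the vocabulary size, context size, trial count and context-sampling rule are hypothetical simplifications, not necessarily the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

def learning_error(gamma, n_words=50, context_size=5, trials=2000):
    """Noisy cross-situational learning with a simple associative learner.
    With probability gamma the target referent is removed from the context;
    a word counts as learned if its true referent has the highest count."""
    counts = np.zeros((n_words, n_words))            # word x referent co-occurrence counts
    for _ in range(trials):
        w = rng.integers(n_words)                    # uttered word (its referent has the same index)
        context = set(rng.choice(n_words, size=context_size, replace=False))
        if rng.random() < gamma:
            context.discard(w)                       # noise: drop the true referent
        else:
            context.add(w)                           # otherwise ensure it is present
        counts[w, list(context)] += 1
    learned = np.argmax(counts, axis=1) == np.arange(n_words)
    return 1.0 - learned.mean()                      # fraction of words not learned

for gamma in (0.1, 0.3, 0.5, 0.7):
    print(gamma, round(learning_error(gamma), 2))
```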
Abstract:
Assessment of the suitability of anthropogenic landscapes for wildlife species is crucial for setting priorities for biodiversity conservation. This study aimed to analyse the environmental suitability of a highly fragmented region of the Brazilian Atlantic Forest, one of the world's 25 recognized biodiversity hotspots, for forest bird species. Eight forest bird species were selected for the analyses, based on point counts (n = 122) conducted in April-September 2006 and January-March 2009. Six additional variables (landscape diversity, distance from forest and streams, aspect, elevation and slope) were modelled in Maxent for (1) the actual and (2) a simulated land cover, based on the forest expansion required by existing Brazilian forest legislation. Models were evaluated by bootstrap or jackknife methods and their performance was assessed by AUC, omission error, binomial probability or p value. All predictive models were statistically significant, with high AUC values and low omission errors. A small proportion of the actual landscape (24.41 +/- 6.31%) was suitable for forest bird species. The simulated landscapes led to an increase of c. 30% in total suitable areas. On average, models predicted a small increase (23.69 +/- 6.95%) in the area of suitable native forest for bird species. Being close to forest increased the environmental suitability of landscapes for all bird species; landscape diversity was also a significant factor for some species. In conclusion, this study demonstrates that species distribution modelling (SDM) successfully predicted bird distribution across a heterogeneous landscape at fine spatial resolution, as all models were biologically relevant and statistically significant. The use of landscape variables as predictors contributed significantly to the results, particularly for species distributions over small extents and at fine scales. This is the first study to evaluate the environmental suitability of the remaining Brazilian Atlantic Forest for bird species in an agricultural landscape, and provides important additional data for regional environmental planning.
Abstract:
This work develops a computational approach to boundary and initial-value problems by using operational matrices in order to run an evolutive process in a Hilbert space. In addition, upper bounds for the errors in the solutions and in their derivatives can be estimated, providing accuracy measures.
Abstract:
Estimates of evapotranspiration on a local scale are important information for agricultural and hydrological practices. However, equations that estimate potential evapotranspiration based only on temperature data, while simple to use, are usually less trustworthy than the Food and Agriculture Organization (FAO) Penman-Monteith standard method. The present work describes two correction procedures for potential evapotranspiration estimates based on temperature, making the results more reliable. Initially, the standard FAO-Penman-Monteith method was evaluated with a complete climatological data set for the period between 2002 and 2006. Then, temperature-based estimates by the Camargo and Jensen-Haise methods were adjusted by error autocorrelation evaluated over biweekly and monthly periods. In a second adjustment, simple linear regression was applied. The adjusted equations were validated with climatic data available for the year 2001. Both proposed methodologies showed good agreement with the standard method, indicating that the methodology can be used for local potential evapotranspiration estimates.
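A minimal sketch of the second correction procedure named above, i.e., a simple linear-regression adjustment of a temperature-based estimate against the FAO Penman-Monteith reference; the series and the held-out validation value are hypothetical.

```python
import numpy as np

# Hypothetical biweekly ET0 series (mm/day): temperature-based estimate vs. FAO Penman-Monteith
et_temp = np.array([3.1, 3.6, 4.4, 5.0, 4.7, 3.9, 3.3, 2.9])
et_pm   = np.array([3.5, 4.0, 4.9, 5.6, 5.2, 4.3, 3.6, 3.1])

# Calibration: fit ET_pm = a * ET_temp + b on the calibration period
a, b = np.polyfit(et_temp, et_pm, deg=1)

def adjust(et_temperature_based):
    """Apply the linear-regression correction to a new temperature-based estimate."""
    return a * np.asarray(et_temperature_based) + b

# Validation on a held-out value (e.g. an observation from the validation year)
print(adjust([3.4]), "vs. reference", 3.8)
```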