946 results for incremental computation
Abstract:
Universidade Estadual de Campinas - Faculdade de Educação Física
Abstract:
OBJECTIVE: To estimate the spatial intensity of urban violence events using wavelet-based methods and emergency room data. METHODS: Information on victims attended at the emergency room of a public hospital in the city of São Paulo, Southeastern Brazil, from January 1, 2002 to January 11, 2003 was obtained from hospital records. The spatial distribution of 3,540 events was recorded, and a uniform random procedure was used to allocate records with incomplete addresses. Point-process and wavelet analysis techniques were used to estimate the spatial intensity, defined as the expected number of events per unit area. RESULTS: Of all georeferenced points, 59% were accidents and 40% were assaults. The events show a non-homogeneous spatial distribution, with high concentration in two districts and along three large avenues in the southern area of the city of São Paulo. CONCLUSIONS: Hospital records combined with methodological tools to estimate the intensity of events are useful for studying urban violence. Wavelet analysis is useful for computing the expected number of events and their respective confidence bands for any sub-region and, consequently, for specifying risk estimates that could be used in decision-making processes for public policies.
Abstract:
Purpose: The aim of this study was to verify the influence of aerobic fitness (VO2max) on internal training loads, as measured by the session rating of perceived exertion (session-RPE) method. Methods: Nine male professional outfield futsal players were monitored for 4 wk of the in-season period with regard to the weekly accumulated session-RPE, while participating in the same training sessions. Single-session RPE was obtained as the product of a 10-point RPE scale and the duration of exercise. Maximal oxygen consumption was determined during an incremental treadmill test. Results: The average training load throughout the 4-wk period varied between 2,876 and 5,035 arbitrary units. Technical-tactical sessions were the predominant source of loading. There was a significant correlation between VO2max (59.6 +/- 2.5 mL·kg⁻¹·min⁻¹) and the overall training load accumulated over the total period (r = -0.75). Conclusions: VO2max plays a key role in determining the magnitude of an individual's perceived exertion during futsal training sessions.
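The session-RPE load described above is a simple product of a CR-10 rating and session duration. A minimal sketch in Python; the sessions listed are hypothetical, not the study's data:

```python
def session_rpe_load(rpe: float, duration_min: float) -> float:
    """Session training load in arbitrary units (AU):
    CR-10 RPE rating multiplied by session duration in minutes."""
    if not 0 <= rpe <= 10:
        raise ValueError("RPE must be on the 0-10 scale")
    return rpe * duration_min

# Weekly accumulated load: the sum of the single-session loads.
week = [(6, 90), (7, 75), (5, 60), (8, 90)]  # hypothetical (RPE, minutes) pairs
weekly_load = sum(session_rpe_load(r, d) for r, d in week)
print(weekly_load)
```

The weekly sum is what the study accumulated per player; the 2,876-5,035 AU range reported corresponds to such weekly totals.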
Abstract:
This study aimed to compare maximal fat oxidation rate parameters between moderate- and low-performance runners. Eighteen runners performed an incremental treadmill test to estimate the individual maximal fat oxidation rate (Fatmax) based on gas measures, and a 10,000-m run on a track. The subjects were then divided into low- and moderate-performance groups using two different criteria: 10,000-m time and VO2max values. When groups were divided using 10,000-m time, there was no significant difference in Fatmax (0.41 +/- 0.16 and 0.27 +/- 0.12 g·min⁻¹, p = 0.07) or in the exercise intensity that elicited Fatmax (59.9 +/- 16.5 and 68.7 +/- 10.3 %VO2max, p = 0.23) between the moderate- and low-performance groups, respectively. When groups were divided using VO2max values, Fatmax was significantly lower in the low-VO2max group than in the high-VO2max group (0.29 +/- 0.10 and 0.47 +/- 0.17 g·min⁻¹, respectively, p < 0.05), but the intensity that elicited Fatmax did not differ between groups (64.4 +/- 14.9 and 61.6 +/- 15.4 %VO2max). Neither Fatmax nor the %VO2max that elicited Fatmax was associated with 10,000-m time. The only variable associated with 10,000-m running performance was the %VO2max used during the run (p < 0.01). In conclusion, the criteria used for dividing groups according to training status might influence the identification of differences in Fatmax or in the intensity that elicits Fatmax.
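The abstract does not state how fat oxidation is computed from the gas measures; a common choice is Frayn's (1983) stoichiometric equation, assumed here purely for illustration, with hypothetical test stages:

```python
def fat_oxidation_g_per_min(vo2_l_min: float, vco2_l_min: float) -> float:
    """Frayn (1983) stoichiometric estimate of whole-body fat oxidation
    from gas exchange (protein oxidation neglected):
    fat (g/min) = 1.67 * VO2 - 1.67 * VCO2, with VO2 and VCO2 in L/min.
    Assumed formula; not quoted from the abstract."""
    return 1.67 * vo2_l_min - 1.67 * vco2_l_min

def fatmax(stages):
    """Return (intensity, rate) for the stage with the highest fat
    oxidation. `stages` holds hypothetical (%VO2max, VO2, VCO2) tuples
    from an incremental test."""
    return max(((p, fat_oxidation_g_per_min(vo2, vco2))
                for p, vo2, vco2 in stages),
               key=lambda t: t[1])

# Hypothetical incremental-test stages: fat oxidation rises, peaks,
# then falls as intensity approaches VO2max.
stages = [(40, 1.8, 1.55), (55, 2.4, 2.10), (70, 3.0, 2.85), (85, 3.6, 3.55)]
intensity, rate = fatmax(stages)
```

Fatmax is then the peak of this curve, and the intensity that elicits it is the quantity the two grouping criteria in the study were compared on.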
Abstract:
Since the first experimental evidence of active conductances in dendrites, most neurons have been shown to exhibit dendritic excitability through the expression of a variety of voltage-gated ion channels. However, despite experimental and theoretical efforts undertaken in the past decades, the role of this excitability in dendritic computation has remained elusive. Here we show that, owing to very general properties of excitable media, the average output of a model of an active dendritic tree is a highly non-linear function of its afferent rate, attaining extremely large dynamic ranges (above 50 dB). Moreover, the model yields double-sigmoid response functions, as experimentally observed in retinal ganglion cells. We claim that enhancement of dynamic range is the primary functional role of active dendritic conductances. We predict that neurons with larger dendritic trees should have larger dynamic ranges and that blocking active conductances should lead to a decrease in dynamic range.
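A dynamic range "above 50 dB" can be unpacked as follows. The decibel convention below (ratio of the afferent rates that produce 10% and 90% of the maximal response) is an assumption based on common usage in the excitable-media literature, not a definition taken from the abstract:

```python
import math

def dynamic_range_db(rate_low: float, rate_high: float) -> float:
    """Dynamic range in dB: the ratio of the afferent rates producing
    90% and 10% of the maximum response, on a decibel scale (a common
    convention, assumed here)."""
    return 10.0 * math.log10(rate_high / rate_low)

# A range above 50 dB means the usable stimulus rates span more than
# five orders of magnitude:
print(dynamic_range_db(1e-2, 1e3))  # approximately 50 dB
```

Non-linear summation in the active tree compresses the response curve, which is what stretches this usable interval of input rates.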
Abstract:
Background: Analyses of population structure and breed diversity have provided insight into the origin and evolution of cattle. Previously, these studies used a low density of microsatellite markers; however, with the large number of single nucleotide polymorphism markers now available, it is possible to perform genome-wide population genetic analyses in cattle. In this study, we used a high-density panel of SNP markers to examine population structure and diversity among eight cattle breeds sampled from Bos indicus and Bos taurus. Results: Two thousand six hundred and forty-one single nucleotide polymorphisms (SNPs) spanning the entire bovine autosomal genome were genotyped in Angus, Brahman, Charolais, Dutch Black and White Dairy, Holstein, Japanese Black, Limousin and Nelore cattle. Population structure was examined using the linkage model in the program STRUCTURE, and Fst estimates were used to construct a neighbor-joining tree representing the phylogenetic relationships among these breeds. Conclusion: The whole-genome SNP panel identified several levels of population substructure in the set of examined cattle breeds. The greatest level of genetic differentiation was detected between the Bos taurus and Bos indicus breeds. When the Bos indicus breeds were excluded from the analysis, genetic differences between beef and dairy breeds and between European and Asian breeds were detected among the Bos taurus breeds. Exploration of the number of SNP loci required to differentiate between breeds showed that with 100 SNP loci, individuals could be correctly clustered into breeds only 50% of the time; thus, a large number of SNP markers is required to replace the 30 microsatellite markers currently in common use in genetic diversity studies.
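For a single biallelic SNP, the Fst estimates used above can be computed from within- and total-population heterozygosities. A sketch with illustrative allele frequencies (not the study's data):

```python
def expected_het(p: float) -> float:
    """Expected heterozygosity at a biallelic SNP with allele frequency p."""
    return 2.0 * p * (1.0 - p)

def fst(freqs_by_pop):
    """Wright's Fst for one SNP: (HT - HS) / HT, where HS is the mean
    within-population heterozygosity and HT the heterozygosity of the
    pooled allele frequency (equal population sizes assumed)."""
    hs = sum(expected_het(p) for p in freqs_by_pop) / len(freqs_by_pop)
    p_bar = sum(freqs_by_pop) / len(freqs_by_pop)
    ht = expected_het(p_bar)
    return (ht - hs) / ht

# A SNP strongly differentiated between two breeds gives a high Fst:
print(fst([0.9, 0.1]))
```

Averaging such per-locus values across thousands of SNPs yields the pairwise distances from which a neighbor-joining tree can be built.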
Abstract:
Background: In areas with limited infrastructure for microscopy diagnosis, rapid diagnostic tests (RDT) have been demonstrated to be effective. Method: The cost-effectiveness of the OptiMAL(R) test and thick smear microscopy was estimated and compared. Data were collected in remote areas of 12 municipalities in the Brazilian Amazon. Data sources included the National Malaria Control Programme of the Ministry of Health, the National Healthcare System reimbursement table, hospitalization records, primary data collected from the municipalities, and the scientific literature. The perspective was that of the Brazilian public health system, the analytical horizon spanned from the onset of fever until the diagnostic results were provided to the patient, and the temporal reference was the year 2006. The results were expressed as costs per adequately diagnosed case in 2006 U.S. dollars. Sensitivity analysis was performed considering key model parameters. Results: In the base-case scenario, considering 92% and 95% sensitivity of thick smear microscopy for Plasmodium falciparum and Plasmodium vivax, respectively, and 100% specificity for both species, thick smear microscopy is more costly and more effective, with an incremental cost estimated at US$ 549.9 per adequately diagnosed case. In the sensitivity analysis, when the sensitivity and specificity of microscopy for P. vivax were 0.90 and 0.98, respectively, and when its sensitivity for P. falciparum was 0.83, the RDT was more cost-effective than microscopy. Conclusion: Microscopy is more cost-effective than OptiMAL(R) in these remote areas if its high accuracy is maintained in the field. Decisions regarding the use of rapid tests for the diagnosis of malaria in these areas depend on current microscopy accuracy in the field.
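The "incremental cost ... per adequately diagnosed case" quoted above is an incremental cost-effectiveness ratio (ICER): the extra cost of the more effective strategy divided by its extra effect. The inputs below are hypothetical, chosen only to illustrate the computation:

```python
def icer(cost_new: float, eff_new: float,
         cost_old: float, eff_old: float) -> float:
    """Incremental cost-effectiveness ratio: additional cost per
    additional unit of effect (here, per adequately diagnosed case)."""
    return (cost_new - cost_old) / (eff_new - eff_old)

# Hypothetical totals, not the study's data: microscopy costs more and
# adequately diagnoses more cases than the RDT.
print(icer(cost_new=30000.0, eff_new=950, cost_old=19003.0, eff_old=930))
```

A sensitivity analysis then re-evaluates this ratio while varying the test accuracies, which is how the study found scenarios where the RDT becomes the better buy.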
Abstract:
Genetic variation provides the basis upon which populations can be genetically improved. Management of animal genetic resources to minimize the loss of genetic diversity, both within and across breeds, has recently received attention at different levels, e.g., breed, national and international. A major need of sustainable improvement and conservation programs is accurate estimates of population parameters, such as the rate of inbreeding and the effective population size. A software system (POPREP) is presented that automatically generates a typeset report. Key parameters for population management, such as age structure, generation interval, variance in family size, rate of inbreeding, and effective population size, form the core of this report. The report includes a default text that describes the definition, computation and meaning of the various parameters, and is summarized in two PDF files, named the Population Structure and Pedigree Analysis Reports. In addition, results (e.g., individual inbreeding coefficients, rate of inbreeding and effective population size) are stored in comma-separated values (CSV) files that are available for further processing. Pedigree data from eight livestock breeds from different species and countries were used to demonstrate the potential of POPREP and to highlight areas for further research.
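Two of the key parameters above are linked by a standard population-genetics relation, Ne = 1 / (2 * ΔF); this is the textbook definition, not something quoted from the abstract:

```python
def effective_population_size(delta_f: float) -> float:
    """Effective population size from the rate of inbreeding per
    generation: Ne = 1 / (2 * dF)."""
    if delta_f <= 0:
        raise ValueError("rate of inbreeding must be positive")
    return 1.0 / (2.0 * delta_f)

# A 1% increase in inbreeding per generation corresponds to Ne = 50,
# a value often cited as a minimum threshold in conservation programs:
print(effective_population_size(0.01))
```

This is why a report like POPREP emphasizes ΔF: a single per-generation rate translates directly into the effective size used to judge a breed's genetic health.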
Abstract:
This work examines the sources of moisture affecting the semi-arid Brazilian Northeast (NEB) during its pre-rainy and rainy season (JFMAM) through a Lagrangian diagnostic method. The FLEXPART model identifies the humidity contributions to the moisture budget over a region through the continuous computation of changes in specific humidity along backward or forward trajectories over periods of up to 10 days. The numerical experiments covered the period from 2000 to 2004, and the results were aggregated on a monthly basis. Results show that, besides a minor local recycling component, the vast majority of the moisture reaching the NEB originates in the South Atlantic basin, and that the nearby wet Amazon basin has almost no impact. Moreover, although the maximum precipitation in the "Poligono das Secas" region (PS) occurs in March, and the maximum precipitation associated with air parcels travelling from the South Atlantic towards the PS is observed from January to March, the highest moisture contribution from this oceanic region occurs slightly later (April). A dynamical analysis suggests that the maximum precipitation observed in the PS sector does not coincide with the maximum moisture supply, probably due to the combined effect of the Walker and Hadley cells in inhibiting rising motions over the region in the months following April.
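The core of this kind of Lagrangian diagnostic is the per-parcel budget (E - P) ≈ m dq/dt: evaporation minus precipitation along the trajectory equals the parcel's mass times the change in its specific humidity. A back-of-the-envelope sketch with a hypothetical parcel (the mass, humidities and 6-h time step are all illustrative assumptions):

```python
def e_minus_p(mass_kg: float, q_series_g_kg, dt_hours: float = 6.0):
    """(E - P) rate for one air parcel, FLEXPART-style: parcel mass times
    the change in specific humidity between consecutive trajectory
    positions. Positive values indicate net evaporation into the parcel,
    negative values net precipitation out of it. Returns kg/s."""
    return [mass_kg * (q2 - q1) / 1000.0 / (dt_hours * 3600.0)
            for q1, q2 in zip(q_series_g_kg, q_series_g_kg[1:])]

# Hypothetical parcel: it moistens over the ocean, then rains out as it
# crosses the NEB (specific humidity in g/kg at 6-h intervals).
rates = e_minus_p(1.0e9, [12.0, 13.5, 11.0, 9.0])
```

Summing these contributions over all parcels crossing a target region, and aggregating monthly, gives the source maps the study describes.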
Sensitivity to noise and ergodicity of an assembly line of cellular automata that classifies density
Abstract:
We investigate the sensitivity of the composite cellular automaton of H. Fuks [Phys. Rev. E 55, R2081 (1997)] to noise and assess the density classification performance of the resulting probabilistic cellular automaton (PCA) numerically. We conclude that the composite PCA performs the density classification task reliably only up to very small levels of noise. In particular, it cannot outperform the noisy Gacs-Kurdyumov-Levin automaton, an imperfect classifier, for any level of noise. While the original composite CA is nonergodic, analyses of relaxation times indicate that its noisy version is an ergodic automaton, with the relaxation times decaying algebraically over an extended range of parameters with an exponent very close (possibly equal) to the mean-field value.
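For reference, a plain implementation of the Gacs-Kurdyumov-Levin (GKL) rule with flip noise. The neighborhood convention (majority of the cell itself and the cells at distances 1 and 3 on the side selected by the current state) is the standard one and is an assumption here, as the abstract does not spell it out:

```python
import random

def gkl_step(s, noise=0.0, rng=random):
    """One synchronous update of the GKL rule on a ring. A cell in state 0
    takes the majority of itself and its neighbours at distances 1 and 3
    to the left; a cell in state 1 uses the neighbours to the right. With
    probability `noise`, the result is flipped (making the CA a PCA)."""
    n = len(s)
    out = []
    for i, c in enumerate(s):
        if c == 0:
            votes = s[i] + s[i - 1] + s[i - 3]      # negative indices wrap
        else:
            votes = s[i] + s[(i + 1) % n] + s[(i + 3) % n]
        v = 1 if votes >= 2 else 0
        if noise > 0.0 and rng.random() < noise:
            v = 1 - v
        out.append(v)
    return out

def classify(s, steps, noise=0.0, rng=random):
    """Iterate the (P)CA; perfect density classification ends in the
    all-0 or all-1 configuration matching the initial majority."""
    for _ in range(steps):
        s = gkl_step(s, noise, rng)
    return s

# Without noise, the homogeneous configurations are absorbing fixed
# points; with noise > 0 they no longer are, which is what makes the
# noisy automaton ergodic.
final = classify([1] * 20 + [0] * 9, steps=100)
```

The sensitivity result in the abstract concerns how quickly this classification ability degrades once `noise` is made positive.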
Abstract:
We present a scheme for quasi-perfect transfer of polariton states from a sender to a spatially separated receiver, both composed of high-quality cavities filled with atomic samples. The sender and the receiver are connected by a non-ideal transmission channel (the data bus), modelled by a network of lossy empty cavities. In particular, we analyze the influence of a large class of data-bus topologies on the fidelity and transfer time of the polariton state. Moreover, we assume dispersive couplings between the polariton fields and the data-bus normal modes in order to achieve a tunneling-like state transfer. Such a tunneling-transfer mechanism, by which the excitation energy of the polariton effectively does not populate the data-bus cavities, appreciably attenuates the dissipative effects of the data-bus cavities. After deriving a Hamiltonian for the effective coupling between the sender and the receiver, we show that the decay rate of the fidelity is proportional to a cooperativity parameter that weighs the cost of the dissipation rate against the benefit of the effective coupling strength. An increase in the fidelity of the transfer process can be achieved at the expense of longer transfer times. The dependence of both the fidelity and the transfer time on the network topology is analyzed in detail for distinct parameter regimes. It follows that the data-bus topology can be exploited to control the time of the state-transfer process.
Abstract:
We present a derivation of the Redfield formalism for treating the dissipative dynamics of a time-dependent quantum system coupled to a classical environment. We compare this formalism with the master equation approach, where the environments are treated quantum mechanically. Focusing on a time-dependent spin-1/2 system, we demonstrate the equivalence between both approaches by showing that they lead to the same Bloch equations and, as a consequence, to the same characteristic times T1 and T2 (associated with the longitudinal and transverse relaxations, respectively). These characteristic times are shown to be related to the operator-sum representation and the equivalent phenomenological-operator approach. Finally, we present a protocol to circumvent the decoherence processes due to the loss of energy (and thus associated with T1). To this end, we simply associate the time dependence of the quantum system with an easily achieved modulated frequency. A possible implementation of the protocol in the context of nuclear magnetic resonance is also proposed.
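In the free-relaxation limit (no driving field, rotating frame), the Bloch equations mentioned above have a simple closed-form solution: the longitudinal magnetization recovers to equilibrium with time constant T1 while the transverse magnetization decays with T2. A minimal sketch with illustrative time constants:

```python
import math

def bloch_relax(mz0, mxy0, m0, t1, t2, t):
    """Free-relaxation solution of the Bloch equations:
    Mz(t)  = M0 + (Mz(0) - M0) * exp(-t / T1)   (longitudinal recovery)
    Mxy(t) = Mxy(0) * exp(-t / T2)              (transverse decay)."""
    mz = m0 + (mz0 - m0) * math.exp(-t / t1)
    mxy = mxy0 * math.exp(-t / t2)
    return mz, mxy

# After an ideal 90-degree pulse (Mz = 0, Mxy = M0), one T1 later the
# longitudinal component has recovered to (1 - 1/e) of its equilibrium
# value, while the transverse component (T2 << T1 here) is essentially gone.
mz, mxy = bloch_relax(mz0=0.0, mxy0=1.0, m0=1.0, t1=1.0, t2=0.1, t=1.0)
```

Both treatments in the paper (Redfield with a classical bath, and the quantum master equation) must reproduce exactly these exponentials, which is what fixing the same T1 and T2 means.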
Abstract:
We propose an alternative fidelity measure (namely, a measure of the degree of similarity) between quantum states and benchmark it against a number of properties of the standard Uhlmann-Jozsa fidelity. This measure is a simple function of the linear entropy and the Hilbert-Schmidt inner product between the given states and is thus, in comparison, not as computationally demanding. It also features several remarkable properties such as being jointly concave and satisfying all of Jozsa's axioms. The trade-off, however, is that it is supermultiplicative and does not behave monotonically under quantum operations. In addition, metrics for the space of density matrices are identified and the joint concavity of the Uhlmann-Jozsa fidelity for qubit states is established.
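A sketch of the kind of measure the abstract describes, combining the Hilbert-Schmidt inner product Tr(ρσ) with the linear entropies 1 - Tr(ρ²) and 1 - Tr(σ²); the exact functional form below is an assumption for illustration, implemented here for single qubits with plain nested lists:

```python
import math

def matmul2(a, b):
    """Product of two 2x2 (possibly complex) matrices as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace2(a):
    return a[0][0] + a[1][1]

def similarity(rho, sigma):
    """Assumed form of the linear-entropy-based fidelity:
    F(rho, sigma) = Tr(rho sigma)
                    + sqrt(1 - Tr(rho^2)) * sqrt(1 - Tr(sigma^2)).
    Only traces of products are needed, so no eigendecomposition or
    matrix square roots are required, unlike the Uhlmann-Jozsa fidelity."""
    hs = trace2(matmul2(rho, sigma)).real
    purity_r = trace2(matmul2(rho, rho)).real
    purity_s = trace2(matmul2(sigma, sigma)).real
    return hs + math.sqrt(max(0.0, 1.0 - purity_r)) \
               * math.sqrt(max(0.0, 1.0 - purity_s))

up = [[1, 0], [0, 0]]          # pure state |0><0|
down = [[0, 0], [0, 1]]        # pure state |1><1|
mixed = [[0.5, 0], [0, 0.5]]   # maximally mixed qubit
```

The computational advantage is visible in the code: everything reduces to traces of matrix products, cheap to evaluate even for large density matrices.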
Abstract:
In this paper, we present an analog of a Bell's inequality violation test for N qubits to be performed on a nuclear magnetic resonance (NMR) quantum computer. This can be used to simulate or predict the results of different Bell's inequality tests, with distinct configurations and larger numbers of qubits. To demonstrate our scheme, we implemented a simulation of the violation of the Clauser-Horne-Shimony-Holt (CHSH) inequality using a two-qubit NMR system and compared the results with those of a photon experiment. The experimental results are well described both by quantum mechanics and by a local realistic hidden-variables model (LRHVM) developed specifically for NMR, which is why we refer to this experiment as a simulation of Bell's inequality violation. Our result shows explicitly how the two theories can be compatible with each other due to the detection loophole. In the last part of this work, we discuss the possibility of testing some fundamental features of quantum mechanics using NMR with highly polarized spins, where a strong discrepancy between quantum mechanics and hidden-variables models can be expected.
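The CHSH inequality that was simulated bounds a combination of two-party correlations at 2 for any local hidden-variable model, while quantum mechanics reaches 2*sqrt(2). A quick check of the quantum prediction for the singlet state at the standard optimal settings:

```python
import math

def chsh(E, a, ap, b, bp):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
    Any local hidden-variable model satisfies |S| <= 2."""
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Quantum-mechanical correlation for the singlet state when the two
# parties measure spin along directions at angles a and b:
E_qm = lambda a, b: -math.cos(a - b)

# Standard settings that maximize the quantum violation:
S = chsh(E_qm, 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(abs(S))
```

An NMR "simulation" reproduces these correlation values from ensemble averages, which is exactly why a detection-loophole-exploiting LRHVM can reproduce them too.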
Abstract:
The existence of quantum correlations (as revealed by quantum discord) other than entanglement, and their role in quantum-information processing (QIP), is a current subject of discussion. In particular, it has been suggested that this nonclassical correlation may provide a computational speedup for some quantum algorithms. In this regard, bulk nuclear magnetic resonance (NMR) has been successfully used as a test bench for many QIP implementations, although it has also been continuously criticized for not presenting entanglement in most of the systems used so far. In this paper, we report a theoretical and experimental study of the dynamics of quantum and classical correlations in an NMR quadrupolar system. We present a method for computing the correlations from experimental NMR deviation-density matrices and show that, given the action of the nuclear-spin environment, relaxation produces a monotonic decay of the correlations in time. Although the experimental realizations were performed in a specific quadrupolar system, the main results presented here apply to any system that uses the deviation-density-matrix formalism.