Abstract:
Hydrological models for PMP-type extreme precipitation are difficult to calibrate because data for such events are scarce. This article presents the calibration process and results for a fine-scale distributed hydrological model developed to estimate probable maximum floods resulting from a PMP. The calibration is carried out on two Swiss catchments for two summer storm events. The computation focuses on estimating the model parameters, which fall into two groups: the first is needed to compute flow velocities, while the second determines the initial and final infiltration capacities for each terrain type. The results, validated with the Nash criterion, show good agreement between simulated and observed flows. We also apply the model to two Romanian catchments, presenting the river network and the estimated flows.
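The Nash criterion mentioned here is the Nash-Sutcliffe model efficiency, which is straightforward to compute from paired observed and simulated flow series; a minimal sketch (the series below are illustrative, not the paper's data):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect fit;
    values <= 0 mean the model is no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

A simulation that always predicts the observed mean scores exactly 0, which is why values well above 0 are read as "good correlation" between simulated and observed flows.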
Abstract:
This study investigated concentrations of quetiapine and norquetiapine in plasma and cerebrospinal fluid (CSF) in 22 schizophrenic patients after 4 weeks of treatment with quetiapine (600 mg/d), preceded by a 3-week washout period. Blood and CSF samples were obtained on days 1 and 28, and CSF levels of homovanillic acid (HVA), 5-hydroxyindoleacetic acid (5-HIAA), and 3-methoxy-4-hydroxyphenylglycol (MHPG) were measured at baseline and after 4 weeks of quetiapine, allowing calculation of the changes in HVA (ΔHVA), 5-HIAA (Δ5-HIAA), and MHPG (ΔMHPG) concentrations. Patients were assessed clinically using the Positive and Negative Syndrome Scale (PANSS) and the Clinical Global Impression Scale at baseline and then at weekly intervals. Plasma levels of quetiapine and norquetiapine were 1110 ± 608 and 444 ± 226 ng/mL, and the corresponding CSF levels were 29 ± 18 and 5 ± 2 ng/mL, respectively. After treatment, the levels of HVA, 5-HIAA, and MHPG were increased by 33%, 35%, and 33%, respectively (P < 0.001). A negative correlation was found between the decrease in PANSS positive subscale scores and CSF ΔHVA (r(rho) = -0.690, P < 0.01), and between the decrease in PANSS negative subscale scores and both CSF Δ5-HIAA (r(rho) = -0.619, P = 0.02) and ΔMHPG (r(rho) = -0.484, P = 0.038). Because schizophrenic patients unfortunately experience relapses even with the best available treatments, monitoring of CSF drug and metabolite levels might prove useful in tailoring individually adjusted treatments.
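The reported r(rho) values are Spearman rank correlations; as a reminder of how that statistic is computed (rank both variables, then take the Pearson correlation of the ranks), a minimal self-contained sketch with illustrative values, not the study's data:

```python
def _ranks(xs):
    """Average ranks (1-based), with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

A perfectly monotone increasing relationship gives rho = 1, a perfectly decreasing one gives rho = -1, matching the sign convention of the negative correlations reported above.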
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
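The claimed equivalence between computing the maximal discrepancy and empirical risk minimization with flipped labels can be checked numerically on a small finite hypothesis class; the identity used below is that, with an even split, min over h of the error on the flipped data equals (1 + min_h(err1 - err2))/2, so 1 - 2*min recovers max_h(err2 - err1). A sketch (the hypothesis class and data are illustrative):

```python
def empirical_error(h, xs, ys):
    """Fraction of points misclassified by hypothesis h."""
    return sum(h(x) != y for x, y in zip(xs, ys)) / len(xs)

def max_discrepancy(hypotheses, xs, ys):
    """Direct computation: max over h of (error on second half - error on first half)."""
    n = len(xs) // 2
    return max(empirical_error(h, xs[n:], ys[n:]) - empirical_error(h, xs[:n], ys[:n])
               for h in hypotheses)

def max_discrepancy_via_erm(hypotheses, xs, ys):
    """Same quantity via empirical risk minimization with second-half labels flipped."""
    n = len(xs) // 2
    flipped = list(ys[:n]) + [1 - y for y in ys[n:]]
    return 1.0 - 2.0 * min(empirical_error(h, xs, flipped) for h in hypotheses)
```

Running both on the same hypothesis class and dataset yields identical values, which is the practical appeal noted in the abstract: any ERM routine doubles as a discrepancy calculator.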
Abstract:
Remote sensing images, with their spatial, spectral, and temporal resolutions and reasonably sized extents, can be processed to represent land cover over large areas with an amount of spatial detail that is very attractive for monitoring, management, and scientific activities. With Moore's Law alive and well, more and more parallelism is being introduced into all computing platforms, at all levels of integration and programming, to achieve higher performance and energy efficiency. Since geometric calibration is one of the most time-consuming processes when working with remote sensing images, the aim of this work is to accelerate it by taking advantage of new computing architectures and technologies, focusing especially on exploiting shared-memory multi-threading hardware. A parallel implementation of the most time-consuming step in remote sensing geometric correction has been developed using OpenMP directives. This work compares the performance of the original serial binary with the parallelized implementation on several modern multi-threaded CPU architectures, and discusses how to find the optimal hardware for a cost-effective execution.
Abstract:
In many research areas (such as public health, environmental contamination, and others) one needs to use data to infer whether some proportion (%) of a population of interest lies (or should lie) below and/or above some threshold, through the computation of a tolerance interval. The idea is, once a threshold is given, to compute the tolerance interval or limit (which may be one- or two-sided) and then check whether it satisfies the given threshold. Since in this work we deal with the computation of one-sided tolerance intervals, for the two-sided case we recommend, for instance, Krishnamoorthy and Mathew [5]. Krishnamoorthy and Mathew [4] computed upper tolerance limits in balanced and unbalanced one-way random effects models, whereas Fonseca et al. [3] did so based on similar ideas but in a two-way nested mixed or random effects model. In the random effects case, Fonseca et al. [3] computed such intervals only for balanced data, whereas in the mixed effects case they did so only for unbalanced data. For the computation of two-sided tolerance intervals in models with mixed and/or random effects we recommend, for instance, Sharma and Mathew [7]. The purpose of this paper is the computation of upper and lower tolerance intervals in a two-way nested mixed effects model with balanced data. For the case of unbalanced data, as mentioned above, Fonseca et al. [3] have already computed the upper tolerance interval. Hence, using the notions presented in Fonseca et al. [3] and Krishnamoorthy and Mathew [4], we present some results on the construction of one-sided tolerance intervals for the balanced case, performing the construction first for the upper limit and then for the lower limit.
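The paper treats two-way nested mixed models; for orientation only, here is the far simpler one-sample normal case, using the standard approximate one-sided k-factor (an approximation, not the exact noncentral-t value, and not the authors' procedure):

```python
from statistics import NormalDist, mean, stdev

def upper_tolerance_limit(sample, coverage=0.95, confidence=0.95):
    """Approximate one-sided upper tolerance limit for an i.i.d. normal sample:
    a value exceeding at least `coverage` of the population with the given
    confidence, computed as x-bar + k*s with the classical approximate k-factor."""
    n = len(sample)
    z_p = NormalDist().inv_cdf(coverage)    # quantile for the covered proportion
    z_a = NormalDist().inv_cdf(confidence)  # quantile for the confidence level
    a = 1 - z_a ** 2 / (2 * (n - 1))
    b = z_p ** 2 - z_a ** 2 / n
    k = (z_p + (z_p ** 2 - a * b) ** 0.5) / a
    return mean(sample) + k * stdev(sample)
```

The check against a threshold then reduces to comparing the returned limit with the threshold: if the upper tolerance limit falls below it, one concludes (at the stated confidence) that the required proportion of the population does too.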
Abstract:
This work proposes an original contribution to the understanding of fishermen's spatial behavior, based on the behavioral ecology and movement ecology paradigms. Through the analysis of Vessel Monitoring System (VMS) data, we characterized the spatial behavior of Peruvian anchovy fishermen at different scales: (1) the behavioral modes within fishing trips (i.e., searching, fishing and cruising); (2) the behavioral patterns among fishing trips; (3) the behavioral patterns by fishing season conditioned by ecosystem scenarios; and (4) the computation of maps of an anchovy presence proxy from the spatial patterns of behavioral mode positions. At the first scale considered, we compared several Markovian models (hidden Markov and semi-Markov models) and discriminative models (random forests, support vector machines and artificial neural networks) for inferring the behavioral modes associated with VMS tracks. The models were trained in a supervised setting and validated using tracks for which behavioral modes were known (from on-board observers' records). Hidden semi-Markov models performed better, and were retained for inferring the behavioral modes on the entire VMS dataset. At the second scale considered, each fishing trip was characterized by several features, including the time spent within each behavioral mode. Using a clustering analysis, fishing trip patterns were classified into groups associated with management zones, fleet segments and skippers' personalities. At the third scale considered, we analyzed how ecological conditions shaped fishermen's behavior. By means of co-inertia analyses, we found significant associations between fishermen, anchovy and environmental spatial dynamics, and fishermen's behavioral responses were characterized according to contrasting environmental scenarios. At the fourth scale considered, we investigated whether the spatial behavior of fishermen reflected to some extent the spatial distribution of anchovy.
Finally, this work provides a wider view of fishermen's behavior: fishermen are not only economic agents, they are also foragers, constrained by ecosystem variability. To conclude, we discuss how these findings may be of importance for fisheries management, collective behavior analyses and end-to-end models.
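Decoding behavioral modes from a VMS track with a hidden Markov model reduces, at prediction time, to the Viterbi algorithm. A minimal log-space sketch with hypothetical modes and discretized speed observations (the thesis itself retained hidden semi-Markov models, which additionally model state durations; this plain-HMM version only illustrates the decoding step):

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for a discrete-observation HMM,
    computed in log space to avoid underflow on long tracks."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev = max(states, key=lambda p: V[-1][p] + math.log(trans_p[p][s]))
            col[s] = V[-1][prev] + math.log(trans_p[prev][s]) + math.log(emit_p[s][o])
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    # Backtrack from the best final state
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

With sticky transitions and speed-dependent emissions (vessels cruise fast, fish slow), a fast-fast-slow-slow speed sequence decodes into a cruising-to-fishing switch, which is the kind of mode labelling applied to the full VMS dataset.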
Abstract:
We present formulas for computing the resultant of sparse polynomials as a quotient of two determinants, the denominator being a minor of the numerator. These formulas extend the original formulation given by Macaulay for homogeneous polynomials.
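For the classical univariate case, the resultant is a single determinant, that of the Sylvester matrix; the Macaulay-style quotient-of-determinants formulas generalize this baseline. A sketch of the univariate computation (exact arithmetic via fractions; not the sparse-resultant construction of the paper):

```python
from fractions import Fraction

def sylvester_resultant(p, q):
    """Resultant of two univariate polynomials given as coefficient lists
    (highest degree first), via the determinant of their Sylvester matrix.
    The resultant is zero iff p and q share a common root."""
    m, n = len(p) - 1, len(q) - 1
    size = m + n
    M = [[Fraction(0)] * size for _ in range(size)]
    for i in range(n):                      # n shifted copies of p
        for j, c in enumerate(p):
            M[i][i + j] = Fraction(c)
    for i in range(m):                      # m shifted copies of q
        for j, c in enumerate(q):
            M[n + i][i + j] = Fraction(c)
    # Exact Gaussian elimination, tracking row-swap signs
    det = Fraction(1)
    for col in range(size):
        piv = next((r for r in range(col, size) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            det = -det
        det *= M[col][col]
        for r in range(col + 1, size):
            f = M[r][col] / M[col][col]
            for c in range(col, size):
                M[r][c] -= f * M[col][c]
    return det
```

For example, x^2 + 1 and x - 1 share no root and have resultant 2, while x^2 - 1 and x - 1 share the root 1 and have resultant 0.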
Abstract:
Tripping is considered a major cause of falls in older people. Therefore, foot clearance (i.e., the height of the foot above the ground during the swing phase) could be a key factor in better understanding the complex relationship between gait and falls. This paper presents a new method to estimate clearance using a foot-worn, wireless inertial sensor system. The method relies on the computation of foot orientation and trajectory from sensor signal data fusion, combined with the temporal detection of toe-off and heel-strike events. Based on a kinematic model that automatically estimates the sensor position relative to the foot, heel and toe trajectories are estimated. 2-D and 3-D models are presented with different solving approaches, and validated against an optical motion capture system on 12 healthy adults performing short walking trials at self-selected, slow, and fast speeds. Parameters corresponding to the local minimum and maximum of heel and toe clearance were extracted and showed accuracy ± precision of 4.1 ± 2.3 cm for maximal heel clearance and 1.3 ± 0.9 cm for minimal toe clearance compared to the reference. The system is lightweight, wireless, easy to wear and use, and provides a new and useful tool for routine clinical assessment of gait outside a dedicated laboratory.
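Once the heel and toe height trajectories are available, extracting the clearance parameters amounts to locating local minima and maxima of those signals within each swing phase. A minimal sketch using strict three-point extrema (a simplification; the authors' actual detection, tied to toe-off and heel-strike events, is not specified at this level of detail):

```python
def local_extrema(signal):
    """Return (minima_indices, maxima_indices) of a 1-D signal,
    using strict comparison with immediate neighbours."""
    minima, maxima = [], []
    for i in range(1, len(signal) - 1):
        if signal[i - 1] > signal[i] < signal[i + 1]:
            minima.append(i)
        elif signal[i - 1] < signal[i] > signal[i + 1]:
            maxima.append(i)
    return minima, maxima
```

Applied to a toe-height series, the minima during swing give minimal toe clearance; applied to a heel-height series, the maxima give maximal heel clearance.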
Abstract:
Weather radar observations are currently the most reliable method for remote sensing of precipitation. However, a number of factors affect the quality of radar observations and may seriously limit automated quantitative applications of radar precipitation estimates such as those required in Numerical Weather Prediction (NWP) data assimilation or in hydrological models. In this paper, a technique to correct two different problems typically present in radar data is presented and evaluated. The aspects dealt with are non-precipitating echoes - caused either by permanent ground clutter or by anomalous propagation of the radar beam (anaprop echoes) - and topographical beam blockage. The correction technique is based on the computation of realistic beam propagation trajectories from recent radiosonde observations instead of assuming standard radio propagation conditions. The correction consists of three different steps: 1) calculation of a Dynamic Elevation Map, which provides the minimum clutter-free antenna elevation for each pixel within the radar coverage; 2) correction of residual anaprop by checking the vertical reflectivity gradients within the radar volume; and 3) topographical beam blockage estimation and correction using a geometric optics approach. The technique is evaluated with four case studies in the region of the Po Valley (N Italy) using a C-band Doppler radar and a network of raingauges providing hourly precipitation measurements. The case studies cover different seasons, different radio propagation conditions, and both stratiform and convective precipitation events. After applying the proposed correction, a comparison of the radar precipitation estimates with raingauges indicates a general reduction in both the root mean squared error and the fractional error variance, indicating the efficiency and robustness of the procedure.
Moreover, the technique presented is not computationally expensive so it seems well suited to be implemented in an operational environment.
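The "standard radio propagation conditions" that the technique replaces are conventionally modelled with the 4/3 effective-earth-radius approximation, under which beam-centre height has a closed form (Doviak and Zrnic). A sketch of that baseline (the constants and the function name are mine; the paper instead traces trajectories from radiosonde profiles):

```python
import math

EARTH_RADIUS_KM = 6371.0
K_E = 4.0 / 3.0  # effective-earth-radius factor for standard refraction

def beam_height_km(range_km, elev_deg, antenna_height_km=0.0):
    """Height of the radar beam centre above the antenna datum under the
    4/3-earth model, for a given slant range and antenna elevation angle."""
    re = K_E * EARTH_RADIUS_KM
    theta = math.radians(elev_deg)
    return (math.sqrt(range_km ** 2 + re ** 2 + 2 * range_km * re * math.sin(theta))
            - re + antenna_height_km)
```

Even at zero elevation the beam climbs with range because of earth curvature (roughly 0.6 km at 100 km range), which is why clutter-free minimum elevations must be mapped pixel by pixel, and why anomalous refraction, which bends the beam below this idealized path, produces the anaprop echoes the paper corrects.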
Abstract:
In this work we present a method for the image analysis of Magnetic Resonance Imaging (MRI) of fetuses. Our goal is to segment the brain surface from multiple volumes (axial, coronal and sagittal acquisitions) of a fetus. To this end we propose a two-step approach: first, a Finite Gaussian Mixture Model (FGMM) will segment the image into 3 classes: brain, non-brain and mixture voxels. Second, a Markov Random Field scheme will be applied to re-distribute mixture voxels into either brain or non-brain tissue. Our main contributions are an adapted energy computation and an extended neighborhood from multiple volumes in the MRF step. Preliminary results on four fetuses of different gestational ages will be shown.
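A Gaussian mixture model of this kind is typically fitted with expectation-maximization on voxel intensities. A toy 1-D, two-component version with deterministic initialization (illustrative only; the paper's FGMM uses three classes over MRI intensities, and its MRF refinement is not shown here):

```python
import math

def gmm_em_1d(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture.
    Initialization at the data extremes is a simplification for clarity."""
    mus = [min(data), max(data)]
    sigmas = [1.0, 1.0]
    weights = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            ps = [w / (s * math.sqrt(2 * math.pi)) * math.exp(-(x - m) ** 2 / (2 * s * s))
                  for w, m, s in zip(weights, mus, sigmas)]
            tot = sum(ps)
            resp.append([p / tot for p in ps])
        # M-step: update mixing weights, means and standard deviations
        for j in range(2):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / len(data)
            mus[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, data)) / nj
            sigmas[j] = max(math.sqrt(var), 1e-3)
    return weights, mus, sigmas
```

The fitted responsibilities play the role of the soft class memberships that the MRF step then re-distributes using spatial neighborhood information.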
Abstract:
A frequency-dependent compact model for inductors on high-ohmic substrates, based on an energy point of view, is developed. This approach enables the description of the most important coupling phenomena that take place inside the device. Magnetically induced losses are quite accurately calculated, and the coupling between electric and magnetic fields is described by means of a delay constant. The latter coupling phenomenon provides a modified procedure for the computation of the fringing capacitance value, when the self-resonance frequency of the inductor is used as a fitting parameter. The model takes into account the width of every metal strip and the pitch between strips. This enables the description of optimized layout inductors. Data from experiments and electromagnetic simulators are presented to test the accuracy of the model.
Abstract:
The labor share of national income is constant under the assumptions of a Cobb-Douglas production function and perfect competition. In this article we relax these assumptions and investigate whether the non-constant behavior of the labor share is explained by (i) a non-unitary elasticity of substitution between capital and labor and (ii) imperfect competition in the product market. We focus on Spain and the U.S. and estimate a production function with a constant elasticity of substitution and imperfect competition in the product market. The degree of imperfect competition is measured by computing the price markup based on the dual approach. We show that the elasticity of substitution is greater than one in Spain and smaller than one in the U.S. We also show that the price markup pushes the elasticity of substitution away from one, raising it in Spain and reducing it in the U.S. These results are used to explain the declining path of the labor share, common to both economies, and their contrasting capital paths.
Abstract:
An extension of the self-consistent field approach formulation by Cohen in the preceding paper is proposed in order to include the most general kind of two-body interactions, i.e., interactions depending on position, momenta, spin, isotopic spin, etc. The dielectric function is replaced by a dielectric matrix. The evaluation of the energies involves the computation of a matrix inversion and trace.
Abstract:
In this paper we examine in detail the implementation, with its associated difficulties, of the Killing conditions and gauge fixing into the variational principle formulation of Bianchi-type cosmologies. We address problems raised in the literature concerning the Lagrangian and the Hamiltonian formulations: We prove their equivalence, make clear the role of the homogeneity preserving diffeomorphisms in the phase space approach, and show that the number of physical degrees of freedom is the same in the Hamiltonian and Lagrangian formulations. Residual gauge transformations play an important role in our approach, and we suggest that Poincaré transformations for special relativistic systems can be understood as residual gauge transformations. In the Appendixes, we give the general computation of the equations of motion and the Lagrangian for any Bianchi-type vacuum metric and for spatially homogeneous Maxwell fields in a nondynamical background (with zero currents). We also illustrate our counting of degrees of freedom in an appendix.
Abstract:
We obtain the next-to-next-to-leading-logarithmic renormalization-group improvement of the spectrum of hydrogenlike atoms with massless fermions by using potential NRQED. These results can also be applied to the computation of the muonic hydrogen spectrum, where we are able to reproduce some known double logarithms at O(mα⁶). We compare with other formalisms dealing with logarithmic resummation available in the literature.