151 results for "Splines monotones"
Abstract:
This work presents the design and development of a worm-gear speed reducer to be installed on an inclined conveyor that raises loads from a lower to an upper level. The project began with research on mechanisms and machine elements, theories of fundamental importance for developing the device's components. SolidWorks was used to model the main parts of the project, and Microsoft Office Excel 2007 was used to organize the formulas needed for the design calculations. All input data for the calculations were taken from the conditions of the problem, so as to solve the proposed problem (lifting the load from the belt) in the best possible way. The full design sequence of the gearbox assembly was followed: preliminary sizing of the worm and selection of the electric motor (an iterative process), then dimensioning of the worm and worm wheel, shafts and splines, and finally bearing calculation and selection.
Abstract:
Background: Although linear growth during childhood may be affected by early-life exposures, few studies have examined whether the effects of these exposures persist into school age, particularly in low- and middle-income countries. Methods: We conducted a population-based longitudinal study of 256 children living in the Brazilian Amazon, aged 0.1 y to 5.5 y in 2003. Data regarding socioeconomic and maternal characteristics, infant feeding practices, morbidities, and birth weight and length were collected at the study baseline (2003). Child body length/height was measured at baseline and at follow-up visits (in 2007 and 2009). Restricted cubic splines were used to construct average height-for-age Z score (HAZ) growth curves, yielding estimated HAZ differences among exposure categories at ages 0.5 y, 1 y, 2 y, 5 y, 7 y, and 10 y. Results: At baseline, median age was 2.6 y (interquartile range, 1.4 y-3.8 y), and mean HAZ was -0.53 (standard deviation, 1.15); 10.2% of children were stunted. In multivariable analysis, children in households above the household wealth index median were 0.30 Z taller at age 5 y (P = 0.017), and children whose families owned land were 0.34 Z taller by age 10 y (P = 0.023), when compared with poorer children. Mothers in the highest tertile for height had children whose HAZ were significantly higher than those of children of mothers in the lowest height tertile at all ages. Birth weight and length were positively related to linear growth throughout childhood; by age 10 y, children weighing >3500 g at birth were 0.31 Z taller than those weighing 2501 g to 3500 g at birth (P = 0.022), and children measuring >= 51 cm at birth were 0.51 Z taller than those measuring <= 48 cm (P = 0.005). Conclusions: Results suggest socioeconomic background is a potentially modifiable predictor of linear growth during the school-aged years.
Maternal height and child's anthropometric characteristics at birth are positively associated with HAZ up until child age 10 y.
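Restricted cubic splines of the kind used for the HAZ growth curves above can be sketched in a few lines. The following minimal illustration builds Harrell's truncated-power basis; the knot ages and the synthetic HAZ data are invented for demonstration and are not the study's values.

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted (natural) cubic spline design matrix after Harrell.

    Columns are [1, x, s_1(x), ..., s_{k-2}(x)]; the fitted curve is
    cubic between knots and linear beyond the boundary knots.
    """
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = len(t)
    d = t[k - 1] - t[k - 2]
    p = lambda u: np.maximum(u, 0.0) ** 3  # truncated cubic
    cols = [np.ones_like(x), x]
    for j in range(k - 2):
        s = (p(x - t[j])
             - p(x - t[k - 2]) * (t[k - 1] - t[j]) / d
             + p(x - t[k - 1]) * (t[k - 2] - t[j]) / d)
        cols.append(s)
    return np.column_stack(cols)

# Illustrative fit: average HAZ as a smooth function of age (synthetic data).
rng = np.random.default_rng(0)
age = rng.uniform(0.1, 10.0, 300)
haz = -0.2 - 0.4 * np.log(age + 0.5) + rng.normal(0, 0.3, age.size)
X = rcs_basis(age, knots=[0.5, 1, 2, 5, 10])
beta, *_ = np.linalg.lstsq(X, haz, rcond=None)

# Estimated average HAZ at the reporting ages used in the paper.
report_ages = np.array([0.5, 1, 2, 5, 7, 10])
curve = rcs_basis(report_ages, [0.5, 1, 2, 5, 10]) @ beta
```

Differences of `curve` between exposure groups, each fitted with its own coefficients, would yield the kind of HAZ contrasts reported above.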
Abstract:
The objective of this study was to estimate (co)variance components using random regression on B-spline functions applied to weight records obtained from birth to adulthood. A total of 82,064 weight records of 8145 females, obtained from the data bank of the Nellore Breeding Program (PMGRN/Nellore Brazil), which started in 1987, were used. The models included direct additive and maternal genetic effects and animal and maternal permanent environmental effects as random. Contemporary group and dam age at calving (linear and quadratic effects) were included as fixed effects. The random effects were modeled using B-spline functions with linear, quadratic and cubic polynomials for each individual segment. Residual variances were grouped into five age classes. Direct additive genetic and animal permanent environmental effects were modeled using up to seven knots (six segments). A single segment with two knots at the end points of the curve was used for the estimation of maternal genetic and maternal permanent environmental effects. A total of 15 models were studied, with the number of parameters ranging from 17 to 81. The models that used B-splines were compared with multi-trait analyses of nine weight traits and with a random regression model that used orthogonal Legendre polynomials of age (cubic regression) as random covariate. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic and animal permanent environmental effects and two knots for the maternal additive genetic and maternal permanent environmental effects, was the most appropriate and parsimonious model to describe the covariance structure of the data. Selection for higher weight at young ages should be performed taking into account an increase in mature cow weight. This is particularly important in most Nellore beef cattle production systems, where the cow herd is maintained on range conditions.
The growth curve of Nellore cattle offers limited scope for modification when selecting for rapid growth at young ages while keeping adult weight constant.
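A quadratic B-spline basis with four knots (three segments), as in the selected model, can be evaluated with standard tools. In this minimal sketch the knot ages and the regression coefficients are illustrative stand-ins, not estimates from the Nellore data.

```python
import numpy as np
from scipy.interpolate import BSpline

degree = 2                         # quadratic, as in the selected model
inner = [0.0, 12.0, 24.0, 36.0]    # four knots -> three segments (ages in months, illustrative)
t = np.r_[[inner[0]] * degree, inner, [inner[-1]] * degree]  # clamped knot vector

n_basis = len(t) - degree - 1      # 5 basis functions
ages = np.linspace(0.0, 36.0, 200)

# Evaluate each basis function by giving BSpline a unit coefficient vector.
basis = np.column_stack([
    BSpline(t, np.eye(n_basis)[i], degree)(ages) for i in range(n_basis)
])

# In a random regression model, each animal gets its own coefficient vector
# weighting these columns; an arbitrary vector stands in for estimates here.
coeffs = np.array([30.0, 80.0, 160.0, 260.0, 340.0])
growth_curve = basis @ coeffs
```

The local support of B-splines (each basis function is nonzero over at most three segments here) is what makes the quadratic four-knot model both flexible and parsimonious compared with global polynomials.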
Abstract:
Abstract Background With the development of DNA hybridization microarray technologies, it is nowadays possible to simultaneously assess the expression levels of thousands to tens of thousands of genes. Quantitative comparison of microarrays uncovers distinct patterns of gene expression, which define different cellular phenotypes or cellular responses to drugs. Due to technical biases, normalization of the intensity levels is a prerequisite for further statistical analyses. Choosing a suitable normalization approach can therefore be critical and deserves judicious consideration. Results Here, we considered three commonly used normalization approaches, namely Loess, Splines and Wavelets, and two non-parametric regression methods that had not previously been used for normalization, namely Kernel smoothing and Support Vector Regression. The results were compared using artificial microarray data and benchmark studies. They indicate that Support Vector Regression is the most robust to outliers and that Kernel smoothing is the worst normalization technique, while no practical differences were observed between Loess, Splines and Wavelets. Conclusion In light of these results, Support Vector Regression is favored for microarray normalization owing to its robustness in estimating the normalization curve.
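The favored approach can be sketched on synthetic two-channel data. The following is a minimal illustration of SVR-based normalization: fit the log-ratio M as a smooth function of average intensity A and subtract the fitted curve. The bias curve, noise level and SVR hyperparameters are invented for demonstration and are not the authors' settings.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic two-channel microarray data with an intensity-dependent dye bias.
A = rng.uniform(6.0, 14.0, 500)            # average log-intensity
bias = 0.8 * np.sin(A / 2.0)               # smooth bias the normalization should remove
M = bias + rng.normal(0.0, 0.2, A.size)    # observed log-ratio = bias + noise

# Estimate the normalization curve M ~ f(A) with epsilon-SVR (robust to
# outliers), then subtract it so normalized log-ratios center on zero.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma="scale")
svr.fit(A.reshape(-1, 1), M)
M_normalized = M - svr.predict(A.reshape(-1, 1))
```

The epsilon-insensitive loss is the source of the robustness claimed above: points far from the curve contribute only linearly to the fit, so a few outlying spots barely shift the estimated normalization curve.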
Abstract:
Resonant states are multiply excited states in atoms and ions that have enough energy to decay by emitting an electron. The ability to emit an electron, together with the strong electron correlation (which is especially strong in negative ions), makes these states both interesting and challenging from a theoretical point of view. The main contribution of this thesis is a method, combining B splines and complex rotation, for solving the three-electron Schrödinger equation with all three electrons treated on an equal footing. It is used to calculate doubly and triply excited states of 4S symmetry with even parity in He-. For the doubly excited states there are experimental and theoretical data to compare with; for the triply excited states only theoretical data are available, and only for one of the resonances. The agreement is in general good. For the triply excited state there is a significant and interesting difference in width between our calculation and another method, and a cause for this deviation is suggested. The method is also used to find a resonant state of 4S symmetry with odd parity in H2-. This state, in this extremely negative system, has been predicted by two earlier calculations but is highly controversial. Several other studies presented here focus on two-electron systems. In one, we study the effect of the splitting of the degenerate H(n=2) thresholds in H- on the resonant states converging to this threshold. If a completely degenerate threshold is assumed, an infinite series of states is expected to converge to the threshold. Here states of 1P symmetry and odd parity are examined, and it is found that the relativistic and radiative splitting of the threshold causes the series to end after only three resonant states. Since the independent-particle model completely fails for doubly excited states, several schemes of alternative quantum numbers have been suggested.
We investigate the so-called DESB (Doubly Excited Symmetry Basis) quantum numbers in several calculations. For the doubly excited states of He- mentioned above, we investigate one resonance and find that it cannot be assigned DESB quantum numbers unambiguously. We also investigate these quantum numbers for states of 1S symmetry and even parity in He, and find two types of mixing of DESB states in the calculated doubly excited states. We also show that the amount of mixing of DESB quantum numbers can be inferred from the value of the cosine of the inter-electronic angle. In a study on Li-, the calculated cosine values are used to identify doubly excited states measured in a photodetachment experiment. In particular, a resonant state that violates a propensity rule is found.
Abstract:
In this work we develop a procedure to deform a given surface triangulation in order to align it with interior curves. These curves are defined by splines in a parametric space and subsequently mapped to the surface triangulation. We have restricted our study to orthogonal mapping, so we require the curves to be contained in a patch of the surface that can be orthogonally projected onto a plane (our parametric space). For example, the curves can represent interfaces between different materials, boundary conditions, internal boundaries or feature lines. Another setting in which this procedure can be used is the adaptation of a reference mesh to changing curves in the course of an evolutionary process...
Abstract:
The application of Isogeometric Analysis (IA) with T-splines [1] requires a partition of the parametric space, C, into a tiling containing T-junctions, called a T-mesh. The T-splines are used both for the geometric modeling of the physical domain, D, and as the basis of the numerical approximation. They have the advantage over NURBS of allowing local refinement. In this work we propose a procedure to construct T-spline representations of complex domains, to be applied to the solution of elliptic PDEs with IA. In previous works [2, 3] we accomplished this task by using a tetrahedral parametrization…
Abstract:
Isogeometric analysis (IGA) has arisen as an attempt to unify the fields of CAD and classical finite element methods. The main idea of IGA is to use for analysis the same functions (splines) that are used in the CAD representation of the geometry. The main advantages with respect to the traditional finite element method are the higher smoothness of the numerical solution and a more accurate representation of the geometry. IGA seems to be a promising tool with a wide range of applications in engineering. However, this relatively new technique has some open problems that require a solution. In this work we present our results and contributions to this issue…
Abstract:
In the present work, two physical flow experiments on nonwoven fabrics are investigated, which serve to identify unknown hydraulic parameters of the material, such as the diffusivity or conductivity function, from measurement data. The physical and mathematical modeling of these experiments leads to a Cauchy-Dirichlet problem with free boundary for the degenerate parabolic Richards equation in the saturation formulation, the so-called direct problem. From knowledge of the free boundary of this problem, the nonlinear diffusivity coefficient of the differential equation is to be reconstructed. For this inverse problem we set up an output least-squares functional and minimize it with iterative regularization methods such as the Levenberg-Marquardt method and the IRGN method, based on a parametrization of the coefficient space by quadratic B-splines. For the direct problem we prove, among other things, existence and uniqueness of the solution of the Cauchy-Dirichlet problem as well as the existence of the free boundary. We then formally reduce the derivative of the free boundary with respect to the coefficient, which is needed for the numerical reconstruction method, to a linear degenerate parabolic boundary value problem. We describe the numerical implementation of our reconstruction method and finally present reconstruction results for synthetic data.
Abstract:
This thesis introduces new processing techniques for computer-aided interpretation of ultrasound images with the purpose of supporting medical diagnosis. In terms of practical application, the goal of this work is the improvement of current prostate biopsy protocols by providing physicians with a visual map, overlaid on ultrasound images, marking regions potentially affected by disease. As far as analysis techniques are concerned, the main contribution of this work to the state of the art is the introduction of deconvolution as a pre-processing step in the standard ultrasonic tissue characterization procedure, to improve the diagnostic significance of ultrasonic features. This thesis also includes some innovations in ultrasound modeling, in particular the employment of a continuous-time autoregressive moving-average (CARMA) model for ultrasound signals, a new maximum-likelihood CARMA estimator based on exponential splines, and the definition of CARMA parameters as new ultrasonic features able to capture scatterer concentration. Finally, concerning the clinical usefulness of the developed techniques, the main contribution of this research is showing, through a study based on medical ground truth, that a reduction in the number of sampled cores in standard prostate biopsy is possible while preserving the diagnostic power of the current clinical protocol.
Abstract:
The three-spectrometer facility at the Mainz Institute for Nuclear Physics was extended by an additional spectrometer, which is distinguished by its short length and is therefore called the Short-Orbit Spectrometer (SOS). At the nominal distance of the SOS from the target (66 cm), the particles to be detected travel a mean path length of 165 cm between the reaction point and the detector. For pion production near threshold, this raises the survival probability of charged pions with a momentum of 100 MeV/c from 15% to 73% compared with the large spectrometers. Accordingly, the systematic error ("muon contamination"), for example in the planned measurement of the weak form factors G_A(Q²) and G_P(Q²), is significantly reduced. The focus of the present work is the drift chamber of the SOS. Its low mass per unit area (0.03% X_0), which reduces small-angle scattering, is optimized for the detection of low-energy pions. Owing to the novel geometry of the detector, dedicated software for track reconstruction, efficiency determination, etc. had to be developed. A convenient method for calibrating the drift-distance/drift-time relation, which is represented by cubic splines, was implemented. The resolution of the tracking detector in the dispersive plane is 76 µm for the spatial and 0.23° for the angular coordinate (most probable error), and correspondingly 110 µm and 0.29° in the non-dispersive plane. To trace the detector coordinates back to the reaction point, the inverse transfer matrix of the spectrometer was determined. For this purpose, electrons quasi-elastically scattered off protons in the ¹²C nucleus were used, whose initial angles were defined by a multi-hole collimator. This yields experimental values for the mean angular resolution at the target of sigma_phi = 1.3 mrad and sigma_theta = 10.6 mrad.
Since the momentum calibration of the SOS can only be carried out by means of quasi-elastic scattering (a two-arm experiment), the contribution of the proton arm to the width of the missing-mass peak must be estimated in a Monte Carlo simulation and deconvolved. For now, it can only be stated that the momentum resolution is certainly better than 1%.
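Representing a drift-distance/drift-time relation with cubic splines can be sketched as follows. The calibration points below are invented stand-ins for values that would really be derived from measured drift-time spectra; only the form of the procedure is illustrated.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative calibration points: drift time (ns) -> drift distance (mm),
# mimicking a monotone, saturating space-time relation.
t_ns = np.array([0.0, 40.0, 80.0, 120.0, 160.0, 200.0])
x_mm = np.array([0.0, 1.1, 2.4, 3.4, 4.1, 4.5])

# Natural boundary conditions keep the curvature zero at the ends.
r_of_t = CubicSpline(t_ns, x_mm, bc_type="natural")

# During track reconstruction, each measured drift time is converted
# to a distance from the sense wire:
distance = r_of_t(100.0)
```

Note that a plain cubic interpolant can overshoot between points; when strict monotonicity of the space-time relation matters, a shape-preserving interpolant such as PCHIP is a common alternative.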
Abstract:
Subdivision surfaces are an excellent and important tool used mainly in 3D animation, since they allow surfaces of arbitrary shape to be defined. This technology extends the concept of B-splines and permits great freedom from topological constraints. Non-Uniform Rational B-Splines (NURBS) also exist for defining surfaces of arbitrary shape, but they do not leave enough freedom for constructing free-form shapes: unlike subdivision surfaces, they require joining various pieces of the surface (trimming). NURBS technology is therefore used mainly in CAD environments, while in computer graphics the use of subdivision surfaces has been widespread for more than 30 years. The aim of this thesis is thus to summarize the concepts behind this technology, to analyze some of the most widely used subdivision schemes, and to discuss briefly how these schemes and algorithms are used in practice for 3D animation.
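Subdivision is easiest to see on curves. The following minimal sketch implements Chaikin's corner-cutting scheme for a closed polygon, whose limit curve is a uniform quadratic B-spline; the square control polygon is just an example.

```python
import numpy as np

def chaikin(points, iterations=3):
    """Chaikin's corner-cutting subdivision for a closed control polygon.

    Each edge (P, Q) is replaced by the two points 3/4 P + 1/4 Q and
    1/4 P + 3/4 Q; repeating this converges to a quadratic B-spline curve.
    """
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        nxt = np.roll(pts, -1, axis=0)          # closed polygon: wrap around
        q = 0.75 * pts + 0.25 * nxt
        r = 0.25 * pts + 0.75 * nxt
        out = np.empty((2 * len(pts), pts.shape[1]))
        out[0::2] = q
        out[1::2] = r
        pts = out
    return pts

square = [[0, 0], [1, 0], [1, 1], [0, 1]]
smooth = chaikin(square, iterations=4)  # 4 * 2**4 = 64 points
```

Because every new point is a convex combination of old ones, the refined polygon stays inside the convex hull of the control polygon, a property subdivision surfaces share with B-spline and NURBS representations.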