819 results for robust mean


Relevance:

20.00%

Publisher:

Abstract:

Over the last decade, molecular phylogenetics has called into question some fundamental aspects of coral systematics. Within the Scleractinia, most families composed exclusively of zooxanthellate species are polyphyletic on the basis of molecular data, and the second most speciose coral family, the Caryophylliidae (most members of which are azooxanthellate), is an unnatural grouping. As part of the process of resolving the taxonomic affinities of caryophylliids, a new 'Robust' scleractinian family (Deltocyathiidae fam. nov.) is proposed here on the basis of combined molecular (COI and 28S rDNA) and morphological data, accommodating the early-diverging clade of traditional caryophylliids (represented today by the genus Deltocyathus). Whereas this family captures the full morphological diversity of the genus Deltocyathus, one species, Deltocyathus magnificus, is an outlier in terms of molecular data and groups with the 'Complex' coral family Turbinoliidae. Ultrastructural data, however, place D. magnificus within Deltocyathiidae fam. nov. Unfortunately, only limited ultrastructural data are as yet available for turbinoliids, but D. magnificus may represent the first documented case of morphological convergence at the microstructural level among scleractinian corals. Marcelo V. Kitahara, Centro de Biologia Marinha, Universidade de São Paulo, São Sebastião, S.P. 11600-000, Brazil. E-mail: kitahara@usp.br

Relevance:

20.00%

Publisher:

Abstract:

The aims of this work are: (i) to produce new experimental data for fretting fatigue considering the presence of a mean bulk stress and (ii) to assess two design methodologies against failure by fretting fatigue. Tests on a cylinder–flat contact configuration were conducted using a fretting apparatus mounted on a servo-hydraulic machine. The material used for both the pads and the fatigue specimens was an aeronautical 7050-T7451 Al alloy. The experimental program was designed with all relevant parameters, apart from the mean bulk load (always applied before the contact loads), kept constant. The mean bulk stress varied from compressive to tensile values while maintaining a high peak pressure in order to encourage crack initiation. Two methodologies against fretting fatigue are proposed and assessed against the experimental data. The non-local stress-based methodology evaluates a critical-plane fatigue criterion at the center of a process zone located beneath the contacting surfaces. The results showed that it correctly predicts crack initiation, but it was not capable of successfully predicting the integrity of the specimens. Alternatively, we considered a crack arrest criterion, which has the potential to provide a more complete description of the integrity of the specimens.
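As an illustration of the non-local, critical-plane idea described above (the abstract does not name the specific criterion; the Smith–Watson–Topper parameter below is one common choice in fretting fatigue, and the notation is ours), the fatigue parameter is maximised over candidate plane orientations θ at the centre of the process zone, a fixed depth d below the contact surface:

```latex
\mathrm{SWT} = \max_{\theta}\;\sigma_{n,\max}(\theta)\,\frac{\Delta\varepsilon_n(\theta)}{2},
\qquad \text{evaluated at depth } d \text{ below the contact,}
```

where σ_{n,max} is the maximum normal stress and Δε_n the normal strain range on the candidate plane; crack initiation is predicted when SWT exceeds the material's fatigue threshold. Averaging over (or evaluating at the centre of) a process zone, rather than at the surface hot spot, is what makes the criterion non-local.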

Relevance:

20.00%

Publisher:

Abstract:

A detailed numerical simulation of ethanol turbulent spray combustion in a round jet flame is presented in this article. The focus is to propose a robust mathematical model with relatively low-complexity submodels to reproduce the main characteristics of the coupling between both phases, such as turbulence modulation, turbulent droplet dissipation, and the evaporative cooling effect. A RANS turbulence model is implemented. Special features of the model include an Eulerian–Lagrangian procedure under fully two-way coupling and a modified flame sheet model with a joint mixture fraction–enthalpy β-PDF. Reasonable agreement between measured and computed mean profiles of gas-phase temperature and droplet size distributions is achieved. Deviations found between measured and predicted mean velocity profiles are attributed to the turbulent combustion modelling adopted.
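For reference, a presumed β-PDF of the mixture fraction Z is the standard ingredient of such flame-sheet closures (the form below is the textbook one, not quoted from this paper):

```latex
\tilde{P}(Z) = \frac{Z^{\,a-1}(1-Z)^{\,b-1}}{\int_0^1 z^{\,a-1}(1-z)^{\,b-1}\,dz},
\qquad
a = \tilde{Z}\,\gamma,\quad b = (1-\tilde{Z})\,\gamma,\quad
\gamma = \frac{\tilde{Z}\,(1-\tilde{Z})}{\widetilde{Z''^2}} - 1,
```

where Z̃ and its variance are the Favre-averaged mixture fraction statistics transported by the RANS model; mean gas-phase quantities are then obtained by integrating the flame-sheet state over P̃(Z) (and, in the joint formulation described here, over enthalpy as well).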

Relevance:

20.00%

Publisher:

Abstract:

Objective: To compare autoantibody features in patients with primary biliary cirrhosis (PBC) and individuals presenting antimitochondrial antibodies (AMAs) but no clinical or biochemical evidence of disease. Methods: A total of 212 AMA-positive serum samples were classified into four groups: PBC (definite PBC, n = 93); PBC/autoimmune disease (AID; PBC plus other AID, n = 37); biochemically normal (BN) individuals (n = 61); and BN/AID (BN plus other AID, n = 21). Samples were tested by indirect immunofluorescence (IIF) on rat kidney (IIF-AMA) and by ELISA [antibodies to the pyruvate dehydrogenase E2 complex (PDC-E2), gp-210, Sp-100, and CENP-A/B]. AMA isotype was determined by IIF-AMA. The avidity of anti-PDC-E2 IgG was determined by 8 M urea-modified ELISA. Results: High-titer IIF-AMA was more frequent in PBC and PBC/AID (57% and 70%) than in BN and BN/AID samples (23% and 19%) (p < 0.001). Triple-isotype IIF-AMA (IgA/IgM/IgG) was more frequent in PBC and PBC/AID samples (35% and 43%) than in BN samples (18%; p = 0.008 and p = 0.013, respectively). Anti-PDC-E2 levels were higher in PBC (mean 3.82; 95% CI 3.36–4.29) and PBC/AID samples (3.89; 3.15–4.63) than in BN (2.43; 1.92–2.94) and BN/AID samples (2.52; 1.54–3.50) (p < 0.001). Anti-PDC-E2 avidity was higher in PBC (mean 64.5%; 95% CI 57.5–71.5%) and PBC/AID samples (66.1%; 54.4–77.8%) than in BN samples (39.2%; 30.9–37.5%) (p < 0.001). PBC and PBC/AID samples recognized more cell domains (mitochondria, nuclear envelope, PML/Sp-100 bodies, centromere) than BN (p = 0.008) and BN/AID samples (p = 0.002). Three variables were independently associated with established PBC: high-avidity anti-PDC-E2 (OR 4.121; 95% CI 2.118–8.019); high-titer IIF-AMA (OR 4.890; 2.319–10.314); and antibodies to three or more antigenic cell domains (OR 9.414; 1.924–46.060). Conclusion: The autoantibody profile was quantitatively and qualitatively more robust in definite PBC than in AMA-positive biochemically normal individuals.
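The abstract does not state the statistical model, but odds ratios reported with 95% confidence intervals in this form typically come from multivariate logistic regression, where for each covariate j:

```latex
\mathrm{OR}_j = e^{\hat{\beta}_j},
\qquad
95\%\ \mathrm{CI} = e^{\hat{\beta}_j \,\pm\, 1.96\,\mathrm{SE}(\hat{\beta}_j)} .
```

Under that reading, the OR of 4.121 for high-avidity anti-PDC-E2, for example, would correspond to a regression coefficient of ln 4.121 ≈ 1.42.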

Relevance:

20.00%

Publisher:

Abstract:

It is proved that, for two given continuous real-valued functions, a certain equation has a unique solution. A generalization to weighted integrals is also presented.

Relevance:

20.00%

Publisher:

Abstract:

[EN] The accuracy and performance of current variational optical flow methods have increased considerably in recent years. These techniques are complex, and care has to be taken with their implementation. The aim of this work is to present a comprehensible implementation of recent variational optical flow methods. We start with an energy model that relies on brightness and gradient constancy terms and a flow-based smoothness term. We minimize this energy model and derive an efficient implicit numerical scheme. In the experimental results, we evaluate the accuracy and performance of this implementation on the Middlebury benchmark database. We show that it is a competitive solution with respect to current methods in the literature. In order to increase performance, we use a simple strategy to parallelize the execution on multi-core processors.
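A typical energy of this family, combining the brightness constancy, gradient constancy, and flow-based smoothness terms named above (the standard form; the exact weights and discretization in this implementation may differ), is:

```latex
E(u,v) = \int_{\Omega} \Psi\!\left( |I(\mathbf{x}+\mathbf{w}) - I(\mathbf{x})|^2
       + \gamma\, |\nabla I(\mathbf{x}+\mathbf{w}) - \nabla I(\mathbf{x})|^2 \right) d\mathbf{x}
       \;+\; \alpha \int_{\Omega} \Psi\!\left( |\nabla u|^2 + |\nabla v|^2 \right) d\mathbf{x},
```

with flow field w = (u, v), weights γ and α, and the robust penalty Ψ(s²) = √(s² + ε²); writing the Euler–Lagrange equations of this energy and lagging the nonlinear terms is what yields the implicit numerical scheme mentioned in the abstract.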

Relevance:

20.00%

Publisher:

Abstract:

[EN] In this work, we describe an implementation of the variational method proposed by Brox et al. in 2004, which yields accurate optical flows with low running times. It has several benefits with respect to the method of Horn and Schunck: it is more robust to the presence of outliers, produces piecewise-smooth flow fields, and can cope with constant brightness changes. The method relies on the brightness and gradient constancy assumptions, using the information of the image intensities and the image gradients to find correspondences. It also generalizes the use of continuous L1 functionals, which help mitigate the effect of outliers and create a Total Variation (TV) regularization. Additionally, it introduces a simple temporal regularization scheme that enforces the temporal coherence of the flow fields.
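Concretely, the continuous L1 penalisation referred to above uses Ψ(s²) = √(s² + ε²), a differentiable approximation of the absolute value, so the smoothness term approximates Total Variation regularisation; an illustrative temporal coherence term (our notation, since the exact scheme is not given in the abstract) penalises frame-to-frame changes of the flow:

```latex
\int_{\Omega} \Psi\!\left(|\nabla u|^2 + |\nabla v|^2\right) d\mathbf{x}
\;\xrightarrow{\;\epsilon \to 0\;}\; \mathrm{TV}(u,v),
\qquad
E_{\mathrm{temp}} = \beta \int_{\Omega} \Psi\!\left(|\mathbf{w}_{t+1}(\mathbf{x}) - \mathbf{w}_{t}(\mathbf{x})|^2\right) d\mathbf{x}.
```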

Relevance:

20.00%

Publisher:

Abstract:

We analyse the influence of colour information in optical flow methods. Typically, most of these techniques compute their solutions from grayscale intensities, for simplicity and faster processing, ignoring the colour features. However, current processing systems have greatly reduced this computational cost and, on the other hand, it is reasonable to assume that a colour image offers more details of the scene, which should facilitate finding better flow fields. The aim of this work is to determine whether a multi-channel approach yields enough of an improvement to justify its use. To address this evaluation, we use a multi-channel implementation of a well-known TV-L1 method. Furthermore, we review the state of the art in colour optical flow methods. In the experiments, we study various solutions using grayscale and RGB images from recent evaluation datasets to verify the benefits of colour in motion estimation.
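One natural multi-channel extension of the TV-L1 energy, shown here only as an illustration of the approach (notation ours; the paper's exact formulation may differ), couples all colour channels in a single robust data cost:

```latex
E(\mathbf{w}) = \int_{\Omega} \sum_{c=1}^{3} \left| I_c(\mathbf{x}+\mathbf{w}) - I_c(\mathbf{x}) \right| d\mathbf{x}
\;+\; \alpha \int_{\Omega} \left( |\nabla u| + |\nabla v| \right) d\mathbf{x},
```

where I_c are the RGB channels and all channels share the same flow w = (u, v); the grayscale variant simply replaces the sum by a single intensity channel, which is the baseline the evaluation compares against.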

Relevance:

20.00%

Publisher:

Abstract:

[EN] This paper proposes the incorporation of engineering knowledge through both (a) advanced state-of-the-art preference-handling decision-making tools integrated in multiobjective evolutionary algorithms and (b) engineering-knowledge-based variance-reduction simulation, as tools for enhancing the robust optimum design of structural frames taking uncertainties in the design variables into consideration. The simultaneous minimization of the constrained weight (adding structural weight and the average distribution of constraint violations) on the one hand, and of the standard deviation of the distribution of constraint violations on the other, is handled with multiobjective-optimization-based evolutionary computation in two different multiobjective algorithms. The optimum design values of the deterministic structural problem in question are proposed as a reference point (the aspiration level) in reference-point-based evolutionary multiobjective algorithms (here g-dominance is used). Results including
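In our notation (reconstructed from the abstract, not quoted from the paper), the bi-objective robust design problem reads:

```latex
\min_{\mathbf{d}} \; f_1(\mathbf{d}) = W(\mathbf{d}) + \overline{v}(\mathbf{d}),
\qquad
\min_{\mathbf{d}} \; f_2(\mathbf{d}) = \sigma_v(\mathbf{d}),
```

where W is the structural weight and v̄, σ_v are the mean and standard deviation of the constraint violation over the distribution of the uncertain design variables, estimated by (variance-reduced) simulation at each candidate design; the deterministic optimum supplies the reference point that the g-dominance-based evolutionary search steers toward.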

Relevance:

20.00%

Publisher:

Abstract:

This work deals with some classes of linear second-order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in R^N, generating metric spaces of Carnot–Carathéodory type. The Carnot–Carathéodory metric related to a family {X_j}_{j=1,...,m} is the control distance obtained by minimizing the time needed to go between two points along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in R^N. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling "X-ellipticity" and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a maximum principle for linear second-order differential operators for which we only assume a Sobolev-type inequality together with a summability condition on the lower-order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case R^N is the support of a Lie group, and moreover we require that the vector fields be left-invariant. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a scalar convexity result for mean-value operators of L-subharmonic functions, where L is our differential operator. In the third chapter we prove a necessary and sufficient condition for the regularity of boundary points for the Dirichlet problem, on an open subset of R^N, related to a sub-Laplacian. On a Carnot group we give the essential background for this type of operator, and we introduce the notion of "quasi-boundedness". Then we show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of boundary points.
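In standard notation (matching the description above), the Carnot–Carathéodory control distance attached to the family X_1, ..., X_m is:

```latex
d_{CC}(x,y) = \inf \Big\{ T > 0 \;:\; \exists\, \gamma:[0,T] \to \mathbb{R}^N,\;
\gamma(0) = x,\; \gamma(T) = y,\;
\dot{\gamma}(t) = \sum_{j=1}^{m} a_j(t)\, X_j(\gamma(t)),\;
\sum_{j=1}^{m} a_j(t)^2 \le 1 \Big\},
```

i.e. the shortest time needed to steer from x to y along curves whose velocity stays in the unit ball spanned by the vector fields.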

Relevance:

20.00%

Publisher:

Abstract:

This thesis deals with an investigation of combinatorial and robust optimisation models to solve railway problems. Railway applications represent a challenging area for operations research. In fact, most problems in this context can be modelled as combinatorial optimisation problems, in which the number of feasible solutions is finite. Yet, despite the astonishing success in the field of combinatorial optimisation, the current state of algorithmic research faces severe difficulties with highly complex and data-intensive applications such as those dealing with optimisation issues in large-scale transportation networks. One of the main issues concerns imperfect information. The idea of Robust Optimisation, as a way to represent and handle mathematically systems whose data are not precisely known, dates back to the 1970s. Unfortunately, none of those techniques has proved successfully applicable in one of the most complex and largest-scale transportation settings: that of railway systems. Railway optimisation deals with planning and scheduling problems over several time horizons. Disturbances are inevitable and severely affect the planning process. Here we focus on two compelling aspects of planning: robust planning and online (real-time) planning.

Relevance:

20.00%

Publisher:

Abstract:

[EN] Nuclear moments and nuclear charge radii of short-lived neon isotopes in the chain 17-26,28Ne were measured by collinear laser spectroscopy at the on-line mass separator ISOLDE at CERN (Geneva). In collinear laser spectroscopy, determining the nuclear charge radii of light isotopes from the isotope shift demands very precise knowledge of the ion beam energy. For this purpose, a new method of beam-energy measurement based on collinear laser spectroscopy was developed and successfully employed in the neon experiments. The experimental results are compared with theoretical calculations within the framework of the shell model and of collective nuclear models.
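For context, two standard relations underlie such measurements (textbook forms, not quoted from this thesis). The isotope shift between isotopes A and A' separates into a mass shift and a field shift, the latter carrying the change in the mean-square charge radius; and in collinear geometry the transition frequency seen by the ion is Doppler-shifted by the beam velocity, which is why the beam energy must be known precisely:

```latex
\delta\nu^{A,A'} = K\,\frac{m_{A'} - m_A}{m_A\, m_{A'}} \;+\; F\,\delta\langle r_c^2\rangle^{A,A'},
\qquad
\nu_{\mathrm{rest}} = \nu_{\mathrm{lab}}\,\gamma\,(1 \pm \beta), \quad \beta = v/c,
```

with the + sign for anticollinear and the − sign for collinear excitation, and K, F the mass- and field-shift constants of the transition. For light isotopes such as neon the mass shift dominates, so the field-shift term, and hence the charge radius, is only accessible if β (i.e. the beam energy) is known with high precision.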

Relevance:

20.00%

Publisher:

Abstract:

3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, has slowed down their translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, fluoroscopic analysis was characterized in depth, in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated in preliminary in-silico studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, and (f) user errors. The effect of each criticality was quantified and verified with a preliminary in-vivo study on the elbow joint. The dominant source of error was identified as the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to enlarge the convergence basin around the optimal pose, the local approach used sequential alignments of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated in a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as in methodological research studies; the mono-planar analysis may be sufficient for clinical applications where analysis time and cost are an issue. A further reduction of the user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed, which halved the analysis time by delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semi-automatic method was comparable to that of a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied in a first in-vivo methodological study of foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
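A minimal, hypothetical sketch of the registration problem at the core of the analysis above: find the 6-DoF pose of a 3D model whose perspective projection best matches a target 2D view. All names, the synthetic data, and the point-based similarity metric are illustrative stand-ins; the thesis's actual pipeline (contour extraction, distortion correction, memetic global search) is far richer.

```python
import numpy as np
from scipy.spatial.transform import Rotation
from scipy.optimize import minimize

def project(points, pose, focal=1000.0):
    """Rigidly transform 3D points by pose = (rx, ry, rz, tx, ty, tz),
    then project with a simple pinhole camera model."""
    R = Rotation.from_euler("xyz", pose[:3]).as_matrix()
    p = points @ R.T + pose[3:]
    return focal * p[:, :2] / p[:, 2:3]

def cost(pose, model_pts, target_2d):
    """Mean squared 2D distance between projected model points and the target
    (a stand-in for the contour-based metrics used in fluoroscopic analysis)."""
    return np.mean((project(model_pts, pose) - target_2d) ** 2)

rng = np.random.default_rng(0)
# Synthetic "bone": a point cloud placed in front of the camera.
model = rng.normal(size=(200, 3)) * 20 + np.array([0.0, 0.0, 500.0])
true_pose = np.array([0.05, -0.03, 0.10, 2.0, -1.0, 15.0])
target = project(model, true_pose)

# Local optimisation from a nearby starting pose. As the abstract notes, the
# narrow convergence basin of local optimisers is exactly why good initial
# conditions (or a global/memetic search) are needed in practice.
res = minimize(cost, x0=np.zeros(6), args=(model, target), method="Powell")
print("estimated:", res.x.round(3), "true:", true_pose)
```

Note how the out-of-plane translation (tz) is the least observable parameter in a single (mono-planar) view, consistent with the much larger out-of-plane error bound reported above; a bi-planar setup constrains it with the second projection.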