946 results for Tree solution method
Abstract:
A method for formulating and algorithmically solving the equations of finite element problems is presented. The method starts with a parametric partition of the domain into juxtaposed strips that permits sweeping the whole region by a sequential addition (or removal) of adjacent strips. The solution of the difference equations constructed over that grid proceeds along with the addition or removal of strips in a manner resembling the transfer matrix approach, except that different rules of composition, which lead to numerically stable algorithms, are used for the stiffness matrices of the strips. Dynamic programming and invariant imbedding ideas underlie the construction of such rules of composition. Among other features of interest, the present methodology gives the analyst some control over the type and quantity of data to be computed. In particular, the one-sweep method presented in Section 9, with no apparent counterpart in standard methods, appears to be very efficient insofar as time and storage are concerned. The paper ends with the presentation of a numerical example.
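The composition of strip stiffness matrices described above can be illustrated with a small sketch. The following Python fragment is not the paper's algorithm; it shows one stable composition rule in the same spirit, a Schur-complement (static-condensation) recursion that merges strips one at a time and then recovers the interface displacements by back-substitution. The strip blocks, the clamped left edge and the spring-chain test case are assumptions made only to keep the example self-contained.

```python
# Sketch only (assumed interfaces, not the paper's exact rule): strips are merged
# one by one through a Schur-complement (static-condensation) step, which is a
# numerically stable way to compose strip stiffness matrices, unlike raw
# transfer-matrix products.
import numpy as np

def sweep(strips, loads):
    """strips[i] = (K_ll, K_lr, K_rl, K_rr): stiffness blocks of strip i,
    partitioned into its left/right interface DOFs; loads[i] = (f_l, f_r).
    The left edge of strip 0 is assumed clamped (u = 0)."""
    (_, _, _, K_rr), (_, f_r) = strips[0], loads[0]
    S, g = K_rr.copy(), f_r.copy()       # condensed stiffness/load at interface 0
    trail = []                           # data kept for the back-substitution pass
    for (K_ll, K_lr, K_rl, K_rr), (f_l, f_r) in zip(strips[1:], loads[1:]):
        A = S + K_ll                     # assembled operator at the shared interface
        rhs = g + f_l
        S = K_rr - K_rl @ np.linalg.solve(A, K_lr)    # stable composition rule
        g = f_r - K_rl @ np.linalg.solve(A, rhs)
        trail.append((A, K_lr, rhs))
    u = [np.linalg.solve(S, g)]          # displacement of the last interface
    for A, K_lr, rhs in reversed(trail): # sweep back to recover interior interfaces
        u.insert(0, np.linalg.solve(A, rhs - K_lr @ u[0]))
    return u

# Tiny check: a chain of unit springs clamped on the left, unit load at the tip.
k = np.array([[1.0]])
strip = (k, -k, -k, k)                   # each strip = one spring element
n = 4
loads = [(np.zeros(1), np.zeros(1))] * (n - 1) + [(np.zeros(1), np.ones(1))]
print([u.item() for u in sweep([strip] * n, loads)])   # -> [1.0, 2.0, 3.0, 4.0]
```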
Abstract:
In this paper the dynamics of axisymmetric, slender, viscous liquid bridges having a volume close to the cylindrical one, and subjected to a small gravitational field parallel to the axis of the liquid bridge, is considered within the context of one-dimensional theories. Although the dynamics of liquid bridges has been treated through numerical analysis in the inviscid case, numerical methods become inappropriate for studying configurations close to the static stability limit because the evolution time, and hence the computing time, increases excessively. To avoid this difficulty, the problem of the evolution of these liquid bridges has been attacked through a nonlinear analysis based on the singular perturbation method and, whenever possible, the results obtained are compared with the numerical ones.
Abstract:
In this thesis, the τ-estimation method for estimating the truncation error is extended from low-order to spectral methods. While most works in the literature rely on fully time-converged solutions on grids with different spacing to perform the estimation, only one grid with different polynomial orders is used in this work. Furthermore, a non-time-converged solution is used, resulting in the quasi-a priori τ-estimation method. The quasi-a priori approach estimates the error when the residual of the time-iterative method is not negligible. It is shown in this work that some of the fundamental assumptions about the behavior of the error, well established for low-order methods, are no longer valid in high-order schemes, making a complete revision of the error behavior necessary before redefining the algorithm. To facilitate this task, the Chebyshev Collocation Method is considered as a first step, limiting the application to simple geometries. The extension to the Discontinuous Galerkin Spectral Element Method introduces additional difficulties for the accurate definition and estimation of the error, due to the weak formulation, the multidomain discretization and the discontinuous formulation. First, the analysis focuses on scalar conservation laws to examine the accuracy of the estimation of the truncation error. Then, the validity of the analysis is shown for the incompressible and compressible Euler and Navier-Stokes equations.
The developed quasi-a priori τ-estimation method permits one to decouple the interfacial and the interior contributions of the truncation error in the Discontinuous Galerkin Spectral Element Method, and provides information about the anisotropy of the solution, as well as its rate of convergence in polynomial order. It is demonstrated here that this quasi-a priori approach yields a spectrally accurate estimate of the truncation error.
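The following toy sketch illustrates the quasi-a priori idea on a one-dimensional Chebyshev collocation discretization of u' + u = f. It is not the thesis implementation: the manufactured solution, the polynomial orders N and P, and the smooth perturbation used to mimic an unconverged iterative solution are all illustrative assumptions. The estimate inserts the interpolated high-order solution into the low-order discrete operator and subtracts the interpolated high-order residual, which is the correction that keeps the estimate usable before full convergence.

```python
# Toy sketch (not the thesis code): quasi-a priori tau-estimation for a 1D
# Chebyshev collocation discretization of u' + u = f with u(-1) prescribed.
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto nodes (Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x

u_exact = lambda x: np.exp(np.sin(x))                  # manufactured solution
f = lambda x: (np.cos(x) + 1.0) * np.exp(np.sin(x))    # so that u' + u = f

def pde_residual(v, D, x):                             # discrete residual of u' + u - f
    return D @ v + v - f(x)

N, P = 8, 24                                           # low / high polynomial orders
DN, xN = cheb(N)
DP, xP = cheb(P)

# High-order solve with the boundary condition imposed in the last row (x = -1).
A, b = DP + np.eye(P + 1), f(xP)
A[-1, :], A[-1, -1], b[-1] = 0.0, 1.0, u_exact(-1.0)
uP = np.linalg.solve(A, b)
uP = uP + 1e-2 * np.exp(xP)            # mimic a non-negligible iteration error

interp = lambda v: BarycentricInterpolator(xP, v)(xN)  # grid-P -> grid-N transfer

tau_true = pde_residual(u_exact(xN), DN, xN)           # exact truncation error
tau_raw  = pde_residual(interp(uP), DN, xN)            # estimate without correction
tau_qap  = tau_raw - interp(pde_residual(uP, DP, xP))  # quasi-a priori correction

print("max |tau_true|            :", np.abs(tau_true).max())
print("error of uncorrected est. :", np.abs(tau_raw - tau_true).max())
print("error of corrected est.   :", np.abs(tau_qap - tau_true).max())
# The corrected estimate tracks tau_true even though uP is not converged.
```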
Abstract:
The two-body problem subject to a constant radial thrust is analyzed as a planar motion. The description of the problem is performed in terms of three perturbation methods: DROMO and two others due to Deprit. All of them rely on Hansen's ideal frame concept. An explicit, analytic, closed-form solution is obtained for this problem when the initial orbit is circular (Tsien problem), based on the DROMO special perturbation method, and expressed in terms of elliptic integral functions. The analytical solution to the Tsien problem is later used as a reference to test the numerical performance of various orbit propagation methods, including DROMO and Deprit methods, as well as Cowell and Kustaanheimo–Stiefel methods.
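As a point of comparison for the propagation methods mentioned above, the sketch below integrates the planar two-body problem with a constant radial thrust by a plain Cowell-type approach (direct integration in Cartesian coordinates with scipy). The nondimensional units, thrust level and tolerances are assumptions for illustration; in the paper's setting, the closed-form Tsien solution plays the role of the reference against which such propagations are scored.

```python
# Sketch only: Cowell-type propagation (direct Cartesian integration) of the
# planar two-body problem with a constant radial thrust, started on a circular
# orbit as in the Tsien problem.  mu = 1, r0 = 1 and the thrust level are
# assumed, nondimensional values chosen purely for illustration.
import numpy as np
from scipy.integrate import solve_ivp

MU = 1.0            # gravitational parameter
A_R = 0.02          # constant outward radial acceleration (assumed)

def rhs(t, s):
    x, y, vx, vy = s
    r = np.hypot(x, y)
    ax = (-MU / r**3 + A_R / r) * x     # gravity + thrust along the radial unit vector
    ay = (-MU / r**3 + A_R / r) * y
    return [vx, vy, ax, ay]

s0 = [1.0, 0.0, 0.0, 1.0]               # circular orbit of unit radius
sol = solve_ivp(rhs, (0.0, 50.0), s0, rtol=1e-10, atol=1e-12)

r = np.hypot(sol.y[0], sol.y[1])
print(f"radius stays within [{r.min():.4f}, {r.max():.4f}] for this thrust level")
```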
Abstract:
In this work, we demonstrate how it is possible to sharply image multiple object points. The Simultaneous Multiple Surface (SMS) design method has usually been presented as a method to couple N wave-front pairs with N surfaces, but recent findings show that when using N surfaces, we can obtain M image points when N
Abstract:
In this article, an approximate analytical solution for the two-body problem perturbed by a low radial acceleration is obtained, using a regularized formulation of the orbital motion and the method of multiple scales. The results reveal that the physics of the problem evolves on two fundamental scales of the true anomaly. The first one drives the oscillations of the orbital parameters along each orbit. The second one is responsible for the long-term variations in the amplitude and mean values of these oscillations. Good agreement is found with high-precision numerical solutions.
Abstract:
One of the biggest challenges that software developers face is making an accurate estimate of the project effort. In this work, radial basis function neural networks have been applied to software effort estimation using a NASA dataset. This paper evaluates and compares a radial basis function network against a regression model. The results show that the radial basis function neural network obtained a lower mean squared error than the regression method.
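A hedged sketch of the kind of comparison reported above is given below: a simple radial basis function network (k-means centres, Gaussian features, linear read-out) against ordinary linear regression, both scored by mean squared error on a held-out split. The data are synthetic stand-ins for the NASA dataset, and this particular network construction is a common one rather than necessarily the authors'.

```python
# Sketch only: RBF network vs. linear regression on synthetic effort data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(1, 100, size=(150, 2))             # e.g. size and a methodology score
y = 3.0 * X[:, 0] ** 1.05 + 0.4 * X[:, 1] + rng.normal(0, 15, 150)   # synthetic effort

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def rbf_features(X, centres, width):
    """Gaussian activations of each sample with respect to each centre."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

centres = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X_tr).cluster_centers_
width = np.median(np.linalg.norm(X_tr[:, None] - centres[None], axis=-1))

rbf_head = LinearRegression().fit(rbf_features(X_tr, centres, width), y_tr)
linreg = LinearRegression().fit(X_tr, y_tr)

print("RBF network MSE:",
      mean_squared_error(y_te, rbf_head.predict(rbf_features(X_te, centres, width))))
print("Regression  MSE:", mean_squared_error(y_te, linreg.predict(X_te)))
```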
Abstract:
Concentrating Photovoltaics (CPV) is an alternative to flat-plate module photovoltaic (PV) technology. The bankability of CPV projects is an important issue for paving the way toward swift and sustained growth of this technology. The bankability of a PV plant is generally addressed through the modeling of its energy yield under a baseline loss scenario, followed by an on-site measurement campaign aimed at verifying its energy performance. This paper proposes a procedure for assessing the performance of a CPV project, articulated around four main successive steps: Solar Resource Assessment, Yield Assessment, Certificate of Provisional Acceptance, and Certificate of Final Acceptance. This methodology allows the long-term energy production of a CPV project to be estimated with an associated uncertainty of ≈5%. To our knowledge, no such method has been proposed to the CPV industry yet, and this critical situation has hindered, or made impossible, the completion of several important CPV projects undertaken around the world. The main motive for the proposed method is to bring a practical solution to this urgent problem. The procedure can be operated under a wide range of climatic conditions and makes it possible to assess the bankability of a CPV plant whose design uses any of the technologies currently available on the market. The method is also compliant with both international standards and local regulations. In consequence, its applicability is both general and international.
Abstract:
Shopfloor Management (SM) empowerment methodologies have traditionally focused on two aspects: goal achievement following rigid structures, such as SQDCME, or the evolution of empowerment factors detached from strategic goal achievement. Furthermore, SM methodologies have been organized almost solely around the hierarchical structure of the organization, systematically failing to cope with the challenges that Industry 4.0 is facing. The latter include the growing complexity of value-stream networks (VSN), sustainable empowerment of the workforce (Learning Factory), autonomous and intelligent process management (Smart Factory), and the leadership paradigm shift toward strategic alignment. This paper presents a novel Lean SM method (LSM) called the “HOSHIN KANRI Tree” (HKT), which is based on the standardization of communication patterns among process owners (POs) through PDCA. The standardization of communication patterns by HKT should bring enormous benefits in value stream (VS) performance, speed of standardization and learning rates to the Industry 4.0 generation of organizations. These potential advantages of HKT are currently being tested in worldwide research.
Abstract:
Vector reconstruction of objects from an unstructured point cloud obtained with a LiDAR-based system (light detection and ranging) is one of the most promising methods to build three-dimensional models of orchards. The cylinder fitting method for woody structure reconstruction of leafless trees from point clouds obtained with a mobile terrestrial laser scanner (MTLS) has been analysed. The advantage of this method is that it performs the reconstruction in a single step. The most time-consuming part of the algorithm is the generation of the cylinder direction, which must be recalculated at the inclusion of each point in the cylinder. The tree skeleton is obtained at the same time as the cluster of cylinders is formed. The method does not guarantee a unique convergence, and the reconstruction parameter values must be carefully chosen. A balanced processing of clusters has also been defined, which has proven to be very efficient in terms of processing time by following the hierarchy of branches, predecessors and successors. The algorithm was applied to simulated MTLS data of virtual orchard models and to MTLS data of real orchards. The constraints applied in the method have been reviewed to ensure better convergence and simpler use of parameters. The results obtained show a correct reconstruction of the woody structure of the trees, and the algorithm runs in linear-logarithmic time.
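The sketch below shows one possible building block of such a reconstruction: fitting a single cylinder to a cluster of points by taking the leading principal direction as the axis and the mean point-to-axis distance as the radius. The incremental point insertion, the direction recalculation and the branch hierarchy described in the abstract are not reproduced; the synthetic branch segment is an assumption used only to check the fit.

```python
# Sketch only: fit one cylinder to a cluster of points assumed to lie on a
# roughly cylindrical branch segment.
import numpy as np

def fit_cylinder(points):
    """points: (n, 3) array of LiDAR returns on one branch segment."""
    centroid = points.mean(axis=0)
    centred = points - centroid
    # Leading right-singular vector = direction of greatest spread = axis guess.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    axis = vt[0]
    # Radial distance of each point to the axis line through the centroid.
    radial = centred - np.outer(centred @ axis, axis)
    radii = np.linalg.norm(radial, axis=1)
    return centroid, axis, radii.mean(), radii.std()

# Synthetic check: noisy points on a cylinder of radius 0.05 m around the z-axis.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 500)
z = rng.uniform(0, 1.0, 500)
pts = np.column_stack([0.05 * np.cos(theta), 0.05 * np.sin(theta), z])
pts += rng.normal(0, 0.002, pts.shape)

centroid, axis, radius, spread = fit_cylinder(pts)
print("axis ~ z:", np.round(np.abs(axis), 3), " radius ~ 0.05:", round(radius, 4))
```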
Abstract:
A panel-method free-wake model to analyse rotor flapping is presented. The aerodynamic model consists of a panel method, which takes into account the three-dimensional rotor geometry, and a free-wake model to determine the wake shape. The main features of the model are the division of the wake into a near-wake sheet and a far wake represented by a single tip vortex, and the modification of the panel method formulation to take this particular wake description into account. The blades are considered rigid, with a flap degree of freedom. The problem solution is approached using a relaxation method, which enforces periodic boundary conditions. Finally, several code validations against helicopter and wind turbine experimental data are performed, showing good agreement.
Abstract:
The use of molecular genetics for introducing fluorescent molecules enables the use of donor–donor energy migration to determine intramolecular distances in a variety of proteins. This approach can be applied to examine the overall molecular dimensions of proteins and to investigate structural changes upon interaction with specific target molecules. In this report, the donor–donor energy migration method is demonstrated by experiments with the latent form of plasminogen activator inhibitor type 1. Based on the known X-ray structure of plasminogen activator inhibitor type 1, three positions forming the corners of a triangle were chosen. Double Cys substitution mutants (V106C-H185C, H185C-M266C, and M266C-V106C) and corresponding single substitution mutants (V106C, H185C, and M266C) were created and labeled with a sulfhydryl-specific derivative of BODIPY (the D molecule). The side lengths of this triangle were obtained from analyses of the experimental data. The analyses account for the local anisotropic order and rotational motions of the D molecules, as well as for the influence of partial DD-labeling. The distances, as determined from X-ray diffraction, between the Cα atoms of the positions V106C–H185C, H185C–M266C, and M266C–V106C were 60.9, 30.8, and 55.1 Å, respectively. These are in good agreement with the distances of 54 ± 4, 38 ± 3, and 55 ± 3 Å, as determined between the BODIPY groups attached via linkers to the same residues. Although the positions of the D molecules and the Cα atoms physically cannot coincide, there is reasonable agreement between the methods.
Abstract:
We report a general method for screening, in solution, the impact of deviations from canonical Watson-Crick composition on the thermodynamic stability of nucleic acid duplexes. We demonstrate how fluorescence resonance energy transfer (FRET) can be used to detect directly free energy differences between an initially formed “reference” duplex (usually a Watson-Crick duplex) and a related “test” duplex containing a lesion/alteration of interest (e.g., a mismatch, a modified, a deleted, or a bulged base, etc.). In one application, one titrates into a solution containing a fluorescently labeled, FRET-active, reference duplex, an unlabeled, single-stranded nucleic acid (test strand), which may or may not compete successfully to form a new duplex. When a new duplex forms by strand displacement, it will not exhibit FRET. The resultant titration curve (normalized fluorescence intensity vs. logarithm of test strand concentration) yields a value for the difference in stability (free energy) between the newly formed, test strand-containing duplex and the initial reference duplex. The use of competitive equilibria in this assay allows the measurement of equilibrium association constants that far exceed the magnitudes accessible by conventional titrimetric techniques. Additionally, because of the sensitivity of fluorescence, the method requires several orders of magnitude less material than most other solution methods. We discuss the advantages of this method for detecting and characterizing any modification that alters duplex stability, including, but not limited to, mutagenic lesions. We underscore the wide range of accessible free energy values that can be defined by this method, the applicability of the method in probing for a myriad of nucleic acid variations, such as single nucleotide polymorphisms, and the potential of the method for high throughput screening.
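The thermodynamic bookkeeping behind such a competition assay can be made concrete with a small sketch. The model below is deliberately simplified (two independent duplex equilibria, free strands in excess over the common labeled strand), and the binding constants and concentrations are invented numbers rather than data from the paper; it only illustrates how the midpoint of the titration curve maps onto the free-energy difference ΔΔG° = −RT ln(K_test/K_ref).

```python
# Sketch only: simplified competition model linking the titration midpoint to
# the free-energy difference between a test duplex and a reference duplex.
import numpy as np

R, T = 1.987e-3, 310.0            # kcal/(mol*K), 37 C

K_ref = 1.0e9                     # association constant of the reference duplex (1/M)
K_test = 2.0e7                    # a mismatched test duplex, ~50x weaker (assumed)
ddG = -R * T * np.log(K_test / K_ref)
print(f"destabilisation of the test duplex: {ddG:+.2f} kcal/mol")

# Fraction of the common strand held in the FRET-active reference duplex when a
# test strand at concentration c competes with a reference complement at b_tot.
b_tot = 1.0e-7                    # M, labelled reference complement (assumed)
c = np.logspace(-9, -3, 200)      # M, titrated test strand
frac = K_ref * b_tot / (1.0 + K_ref * b_tot + K_test * c)

c50 = c[np.argmin(np.abs(frac - 0.5 * frac[0]))]   # midpoint of the titration curve
print(f"midpoint near c50 ~ {c50:.1e} M; R*T*ln(c50/b_tot) = "
      f"{R * T * np.log(c50 / b_tot):+.2f} kcal/mol (compare with ddG above)")
```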
Abstract:
Given a pool of motorists, how do we estimate the total intensity of those who had a prespecified number of traffic accidents in the past year? We have previously proposed the u,v method as a solution to estimation problems of this type. In this paper, we prove that the u,v method provides asymptotically efficient estimators in an important special case.
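Without claiming to reproduce the u,v estimator itself, the sketch below illustrates the flavour of this estimation problem with the classical empirical-Bayes identity for Poisson counts: an unbiased estimate of the total intensity of the motorists observed at exactly k accidents is (k+1) times the number of motorists observed at k+1 accidents. The Gamma-distributed accident rates are a simulation assumption used only to check this numerically.

```python
# Sketch only: numerical check of the classical (k+1)*N_{k+1} identity for
# Poisson accident counts; not claimed to coincide with the paper's u,v
# estimator in every detail.
import numpy as np

rng = np.random.default_rng(42)
n, k = 200_000, 1                                 # motorists, prespecified count
lam = rng.gamma(shape=0.8, scale=0.15, size=n)    # hypothetical accident rates
x = rng.poisson(lam)

true_total = lam[x == k].sum()                    # unobservable target quantity
estimate = (k + 1) * np.count_nonzero(x == k + 1)

print(f"true total intensity ~ {true_total:.1f}, estimate ~ {estimate:.1f}")
```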
Abstract:
19F nuclear Overhauser effects (NOEs) between fluorine labels on the cytoplasmic domain of rhodopsin solubilized in detergent micelles are reported. Previously, high-resolution solution 19F NMR spectra of fluorine-labeled rhodopsin in detergent micelles were described, demonstrating the applicability of this technique to studies of tertiary structure in the cytoplasmic domain. To quantitate tertiary contacts we have applied a transient one-dimensional difference NOE solution 19F NMR experiment to this system, permitting assessment of proximities between fluorine labels specifically incorporated into different regions of the cytoplasmic face. Three dicysteine substitution mutants (Cys-140–Cys-316, Cys-65–Cys-316, and Cys-139–Cys-251) were labeled by attachment of the trifluoroethylthio group through a disulfide linkage. Each mutant rhodopsin was prepared (8–10 mg) in dodecylmaltoside and analyzed at 20°C by solution 19F NMR. Distinct chemical shifts were observed for all of the rhodopsin 19F labels in the dark. An up-field shift of the Cys-316 resonance in the Cys-65–Cys-316 mutant suggests a close proximity between the two residues. When analyzed for 19F-19F NOEs, a moderate negative enhancement was observed for the Cys-65–Cys-316 pair and a strong negative enhancement was observed for the Cys-139–Cys-251 pair, indicating proximity between these sites. No NOE enhancement was observed for the Cys-140–Cys-316 pair. These NOE effects demonstrate a solution 19F NMR method for analysis of tertiary contacts in high molecular weight proteins, including membrane proteins.