974 results for Point method


Relevance: 70.00%

Abstract:

Background and Objectives: Improved ultrasound and needle technology have made popliteal sciatic nerve blockade a popular anesthetic technique. Imaging to localize the branch point of the common peroneal and posterior tibial components is important because successful blockade techniques vary with respect to injection of the common trunk proximally or separate injections distally. Nerve stimulation, ultrasound, cadaveric, and magnetic resonance studies demonstrate variability in the distance to the branch point and discordance between imaging and anatomic examination. The popliteal crease and imprecise, inaccessible landmarks render measurement of the branch point variable and inaccurate. The purpose of this study was to use the tibial tuberosity, a fixed bony reference, to measure the distance to the branch point. Method: During popliteal sciatic nerve blockade in the supine position, the branch point was identified by ultrasound and the block needle was inserted. The vertical distance between the tibial tuberosity prominence and the needle insertion point was measured. Results: In 92 patients, the branch point was a mean distance of 12.91 cm proximal to the tibial tuberosity, and more proximal in male (13.74 cm) than in female patients (12.08 cm). Body height is related to the branch point distance: the branch point is more proximal in taller patients. Separation into two nerve branches during local anesthetic injection supports notions of a more proximal neural anatomic division. Limitations: Imaging of the sciatic nerve division may not equal its true anatomic separation. Conclusion: Refinements in the identification and resolution of the anatomic division of the nerve branch point will determine whether more accurate localization is of any clinical significance for successful nerve blockade.
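A hedged sketch of the kind of height-to-distance relation the study reports, fit on invented data (the abstract gives only the group means, so every value below is illustrative):

```python
import numpy as np

# Hypothetical data: the abstract reports a mean branch-point distance of
# 12.91 cm in 92 patients and a positive relation with body height; the
# slope, spread, and noise below are our assumptions, not study results.
rng = np.random.default_rng(3)
height_cm = rng.normal(170.0, 10.0, size=92)
branch_pt_cm = 12.91 + 0.08 * (height_cm - 170.0) + rng.normal(0.0, 1.0, 92)

slope, intercept = np.polyfit(height_cm, branch_pt_cm, 1)  # simple linear fit
print(f"branch point ~ {intercept:.2f} + {slope:.3f} * height (cm)")
```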

Relevance: 70.00%

Abstract:

Linear Programming (LP) is a powerful decision-making tool extensively used in various economic and engineering activities. In the early stages, the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least-squares problem present in every iteration of all interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration, and suggestions are included for improvements when solving problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems, so a review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented, taking sparsity into account. The relative performance of these algorithms on randomly generated separable and non-separable problems is also reported.
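As a rough illustration of the role of the preconditioned conjugate gradient method, here is a minimal sketch that solves a normal-equations system of the kind arising in an interior point step, with a simple Jacobi preconditioner. The sizes and data are invented, and a practical code would exploit sparsity throughout:

```python
import numpy as np

def pcg(apply_S, b, apply_Minv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for S x = b, S symmetric positive
    definite; only matrix-vector products with S and M^-1 are required."""
    x = np.zeros_like(b)
    r = b - apply_S(x)
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Sp = apply_S(p)
        alpha = rz / (p @ Sp)
        x += alpha * p
        r -= alpha * Sp
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Normal-equations system A D^2 A^T dy = rhs from an interior point step,
# with a diagonal (Jacobi) preconditioner; dense here for brevity.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 12))       # constraint matrix (sparse in practice)
d2 = rng.uniform(0.1, 10.0, size=12)   # D^2 = diag(x/s), changes every iteration
S = A @ np.diag(d2) @ A.T
rhs = rng.standard_normal(5)
dy = pcg(lambda v: S @ v, rhs, lambda r: r / np.diag(S))
print(np.allclose(S @ dy, rhs, atol=1e-8))
```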

Relevance: 70.00%

Abstract:

The accurate and reliable estimation of travel time based on point detector data is needed to support Intelligent Transportation System (ITS) applications. It has been found that the quality of travel time estimation is a function of the method used in the estimation and varies for different traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line methods, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, which is a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status automatically identified by clustering analysis. During incident conditions with rapidly changing queue lengths, shock wave analysis-based refinements are applied in the on-line estimation to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the developed models were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion, but their performance diverges as congestion increases. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as during incidents. The impacts of major influential factors on the performance of travel time estimation were also investigated, including data preprocessing procedures, detector errors, detector spacing, frequency of travel time updates to traveler information devices, travel time link length, and posted travel time range. The results show that these factors have more significant impacts on estimation accuracy and reliability under congested conditions than during uncongested conditions. For incident conditions, the estimation quality improves with the use of a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
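The Mid-Point method named above is a speed-based estimator; a minimal sketch of one common formulation follows, in which each detector's spot speed is assumed to hold from the midpoint to the previous detector to the midpoint to the next one. The detector layout and speeds are invented, not drawn from the study:

```python
# Hedged sketch of a mid-point style travel time estimate on one link.
def midpoint_travel_time(positions_km, speeds_kmh):
    """positions_km: sorted detector locations along the link;
    speeds_kmh: concurrent spot speeds measured at those detectors."""
    n = len(positions_km)
    # segment boundaries: link start, midpoints between detectors, link end
    bounds = [positions_km[0]]
    bounds += [(positions_km[i] + positions_km[i + 1]) / 2 for i in range(n - 1)]
    bounds.append(positions_km[-1])
    hours = sum((bounds[i + 1] - bounds[i]) / speeds_kmh[i] for i in range(n))
    return hours * 3600  # seconds

# three detectors over a 2 km link, with a slow middle detector (queue)
print(midpoint_travel_time([0.0, 1.0, 2.0], [90.0, 30.0, 80.0]))
```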

Relevance: 60.00%

Abstract:

The aim of this study was to assess the prevalence of inadequate nutrient intake in a group of adolescents from São Bernardo do Campo, SP, Brazil. Energy and nutrient intake data were obtained through 24-hour dietary recalls applied to 89 adolescents. The prevalence of inadequacy was calculated using the EAR cut-point method, after adjustment for within-person variability using the procedure developed at Iowa State University. The Dietary Reference Intakes (DRIs) were used as the reference values for intake. For nutrients without an established EAR, the intake distribution was compared with the AI. The highest prevalences of inadequacy in both sexes were observed for magnesium (99.3% for males and 81.8% for females), zinc (44.0% for males and 23.5% for females), vitamin C (57.2% for males and 59.9% for females) and folate (34.8% for females). The proportion of individuals with intake above the AI was negligible (less than 2.0%) in both sexes.
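A minimal sketch of the EAR cut-point calculation itself: once the usual-intake distribution has been adjusted for within-person variability, the prevalence of inadequacy is simply the share of the group whose usual intake falls below the EAR. The intakes and cut-point below are illustrative, not study data:

```python
import numpy as np

def prevalence_of_inadequacy(usual_intakes, ear):
    """Percent of the group with adjusted usual intake below the EAR."""
    return float(np.mean(np.asarray(usual_intakes) < ear) * 100)

rng = np.random.default_rng(1)
zinc_intake_mg = rng.normal(loc=9.0, scale=2.5, size=89)  # hypothetical adjusted intakes
ZINC_EAR_MG = 8.5                                         # illustrative cut-point
print(f"{prevalence_of_inadequacy(zinc_intake_mg, ZINC_EAR_MG):.1f}% below the EAR")
```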

Relevance: 60.00%

Abstract:

This paper presents a new approach, the predictor-corrector modified barrier approach (PCMBA), to minimize active losses in power system planning studies. In the PCMBA, the inequality constraints are transformed into equalities by introducing positive auxiliary variables, which are perturbed by the barrier parameter and treated by the modified barrier method. The first-order necessary conditions of the Lagrangian function are solved by a predictor-corrector Newton's method. The perturbation of the auxiliary variables results in an expansion of the feasible set of the original problem, allowing the limits of the inequality constraints to be reached. The feasibility of the proposed approach is demonstrated on various IEEE test systems and on a realistic 2256-bus power system corresponding to the Brazilian South-Southeastern interconnected system. The results show that combining the predictor-corrector method with the pure modified barrier approach accelerates the convergence of the problem in terms of the number of iterations and computational time.
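For orientation, here is a one-dimensional sketch of the classical modified barrier idea that the PCMBA builds on, shown without the predictor-corrector refinement and on a toy problem of our own choosing (not the paper's power-flow formulation). Note how the barrier term is defined on an expanded feasible set, mirroring the feasible-set expansion mentioned in the abstract:

```python
# Toy problem: minimize f(x) = (x - 2)^2 subject to g(x) = 1 - x >= 0.
# Modified barrier term -mu * lam * ln(1 + g(x)/mu) is defined wherever
# g(x) > -mu, i.e. on an expanded feasible set.

def newton_min(lam, mu, x, tol=1e-12):
    # Newton's method on the stationarity condition of
    # F(x) = (x - 2)^2 - mu * lam * ln(1 + (1 - x)/mu)
    for _ in range(100):
        t = 1.0 + (1.0 - x) / mu            # must stay positive (expanded set)
        grad = 2.0 * (x - 2.0) + lam / t
        hess = 2.0 + lam / (mu * t * t)
        step = -grad / hess
        while 1.0 + (1.0 - (x + step)) / mu <= 0.0:
            step *= 0.5                     # damp to stay in the expanded set
        x += step
        if abs(grad) < tol:
            return x
    return x

x, lam, mu = 0.0, 1.0, 0.5
for _ in range(20):
    x = newton_min(lam, mu, x)
    lam /= 1.0 + (1.0 - x) / mu             # multiplier update
print(x, lam)                               # -> 1.0 and 2.0 (constraint active)
```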

Relevance: 60.00%

Abstract:

The paper presents a theory for modeling flow in anisotropic, viscous rock. This theory was originally developed for the simulation of large deformation processes, including the folding and kinking of multi-layered visco-elastic rock (Muhlhaus et al. [1,2]). The orientation of slip planes in the context of crystallographic slip is determined by the normal vector, the director, of these surfaces. The model is applied to simulate anisotropic mantle convection. We compare the evolution of flow patterns, Nusselt number, and director orientations for isotropic and anisotropic rheologies. In the simulations we utilize two different finite element methodologies: the Lagrangian Integration Point Method of Moresi et al. [8] and an Eulerian formulation, which we implemented in the finite element based PDE solver Fastflo (www.cmis.csiro.au/Fastflo/). The reason for utilizing two different finite element codes was, firstly, to study the influence of an anisotropic power-law rheology, which is currently not implemented in the Lagrangian integration point scheme [8], and secondly, to compare the numerical performance of the Eulerian (Fastflo) and Lagrangian integration schemes [8]. It turned out that, whereas with the Lagrangian method the Nusselt number vs. time plot reached only a quasi-steady state, in which the Nusselt number oscillates around a steady-state value, the Eulerian scheme reaches exact steady states and produces a high degree of alignment (director orientation locally orthogonal to the velocity vector almost everywhere in the computational domain). In the simulations, emergent anisotropy was strongest, in terms of modulus contrast, in the upwelling and downwelling plumes. Mechanisms for anisotropic material behavior in the mantle dynamics context are discussed by Christensen [3]. The dominant mineral phases in the mantle generally do not exhibit strong elastic anisotropy, but they may still be oriented by the convective flow. Thus viscous anisotropy (the main focus of this paper) may or may not correlate with elastic or seismic anisotropy.
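As a small illustration of the alignment diagnostic mentioned above (the director locally orthogonal to the velocity), the following hedged sketch computes a mean alignment measure on synthetic fields; it is not taken from either code:

```python
import numpy as np

def mean_alignment(directors, velocities):
    """1.0 = director orthogonal to velocity everywhere, 0.0 = parallel."""
    d = directors / np.linalg.norm(directors, axis=-1, keepdims=True)
    v = velocities / np.linalg.norm(velocities, axis=-1, keepdims=True)
    cos = np.abs(np.sum(d * v, axis=-1))   # |cos| of the local angle
    return float(np.mean(1.0 - cos))

rng = np.random.default_rng(2)
v = rng.standard_normal((1000, 2))                       # synthetic velocities
d_aligned = np.stack([-v[:, 1], v[:, 0]], axis=1)        # v rotated by 90 degrees
print(mean_alignment(d_aligned, v))                      # -> 1.0 (fully aligned)
print(mean_alignment(rng.standard_normal((1000, 2)), v)) # random field, lower
```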

Relevance: 60.00%

Abstract:

We illustrate the flow behaviour of fluids with isotropic and anisotropic microstructure (internal length, layering with bending stiffness) by means of numerical simulations of silo discharge and flow alignment in simple shear. The Cosserat theory is used to provide an internal length in the constitutive model through bending stiffness, describing isotropic microstructure, and is coupled to a director theory that adds a specific grain orientation to describe anisotropic microstructure. The numerical solution is based on an implicit form of the Material Point Method developed by Moresi et al. [1].
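For readers unfamiliar with the particle/grid structure underlying the Material Point Method, here is a deliberately minimal explicit 1D transfer step; it is a generic textbook-style sketch, not the implicit formulation of Moresi et al. [1]:

```python
import numpy as np

# Minimal explicit MPM transfer in 1D with linear shape functions.
n_nodes, dx = 11, 0.1
xp = np.array([0.33, 0.48, 0.55])     # material point positions
mp = np.array([1.0, 1.0, 1.0])        # material point masses
vp = np.array([0.1, 0.0, -0.1])       # material point velocities

# particle-to-grid: scatter mass and momentum with linear weights
m_grid = np.zeros(n_nodes)
mv_grid = np.zeros(n_nodes)
for x, m, v in zip(xp, mp, vp):
    i = int(x / dx)                   # left node of the cell containing x
    w_right = x / dx - i              # linear shape-function weight
    for node, w in ((i, 1.0 - w_right), (i + 1, w_right)):
        m_grid[node] += w * m
        mv_grid[node] += w * m * v

v_grid = np.divide(mv_grid, m_grid, out=np.zeros(n_nodes), where=m_grid > 0)

# grid-to-particle: gather updated velocities back (no forces in this sketch)
for k, x in enumerate(xp):
    i = int(x / dx)
    w_right = x / dx - i
    vp[k] = (1.0 - w_right) * v_grid[i] + w_right * v_grid[i + 1]
print(vp)
```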

Relevance: 60.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 60.00%

Abstract:

Questions: A multiple plot design was developed for permanent vegetation plots. How reliable are the different methods used in this design, and which changes can we measure? Location: Alpine meadows (2430 m a.s.l.) in the Swiss Alps. Methods: Four inventories were obtained from 40 m² plots: four subplots (0.4 m²) with a list of species, two 10 m transects sampled with the point method (50 points on each), one subplot (4 m²) with a list of species and visual cover estimates as percentages, and the complete plot (40 m²) with a list of species and visual estimates in classes. This design was tested by five to seven experienced botanists in three plots. Results: Whatever the sampling size, only 45-63% of the species were seen by all the observers; however, the majority of the overlooked species had cover < 0.1%. Pairs of observers overlooked 10-20% fewer species than single observers. The point method was the best method for cover estimation, but it took much longer than visual cover estimates, and 100 points allowed for the monitoring of only a very limited number of species. The visual estimate as a percentage was more precise than estimates in classes. Working in pairs did not improve the estimates, but one botanist repeating the survey was more reliable than a succession of different observers. Conclusion: Lists of species are insufficient for monitoring. It is necessary to add cover estimates to allow for subsequent interpretations in spite of the overlooked species. The choice of method depends on the available resources: the point method is time-consuming but gives precise data for a limited number of species, while visual estimates are quick but record only large changes in cover. Constant pairs of observers improve the reliability of the records.
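The point method reduces cover estimation to counting hits; a trivial sketch with invented counts, matching the 2 × 50 point design above:

```python
# Point-intercept cover: percent cover of a species is the fraction of
# sampled points at which it is hit. Counts below are illustrative.
def point_method_cover(hits, total_points):
    return 100.0 * hits / total_points

total = 2 * 50  # two 10 m transects, 50 points each, as in the design above
print(point_method_cover(hits=17, total_points=total))  # -> 17.0 percent cover

# With 100 points the resolution is one percentage point, which is why species
# below ~1% cover are effectively invisible to this protocol.
```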

Relevance: 60.00%

Abstract:

To further validate the doubly labeled water method for the measurement of CO₂ production and energy expenditure in humans, we compared it with near-continuous respiratory gas exchange in nine healthy young adult males. Subjects were housed in a respiratory chamber for 4 days. Each received ²H₂¹⁸O at either a low (n = 6) or a moderate (n = 3) isotope dose. Low and moderate doses produced initial ²H enrichments of 5 and 10 × 10⁻³ atom percent excess, respectively, and initial ¹⁸O enrichments of 2 and 2.5 × 10⁻² atom percent excess, respectively. Total body water was calculated from isotope dilution in saliva collected at 4 and 5 h after the dose. CO₂ production was calculated by the two-point method using the isotopic enrichments of urine samples collected just before each subject entered and left the chamber. Isotope enrichments relative to predose samples were measured by isotope ratio mass spectrometry. At the low isotope dose, doubly labeled water overestimated average daily energy expenditure by 8 ± 9% (SD; range −7 to 22%). At the moderate dose the difference was reduced to 4 ± 5% (range 0-9%). The isotope elimination curves for ²H and ¹⁸O from serial urine samples collected from one of the subjects showed the expected diurnal variations but were otherwise quite smooth. The overestimate may be due to approximations in the corrections for isotope fractionation and isotope dilution. An alternative approach to the corrections is presented that reduces the overestimate to 1%.
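A hedged sketch of the two-point calculation described above: elimination rates for ²H and ¹⁸O are computed from just two samples (start and end of the chamber stay), and CO₂ production follows, to first order, from the difference in turnover (oxygen leaves as both water and CO₂, hydrogen as water only). The fractionation and dilution corrections discussed in the abstract are deliberately omitted, and all numbers are invented:

```python
import numpy as np

def elimination_rate(e1, e2, t1, t2):
    """Rate constant from two enrichments (atom percent excess) at t1, t2 (days)."""
    return np.log(e1 / e2) / (t2 - t1)

k_o = elimination_rate(2.0e-2, 1.2e-2, 0.0, 4.0)  # 18O turnover, per day
k_h = elimination_rate(5.0e-3, 3.4e-3, 0.0, 4.0)  # 2H turnover, per day
tbw_mol = 2200.0                                  # total body water, mol (hypothetical)

# uncorrected first-order relation: each CO2 carries two oxygens
r_co2 = tbw_mol * (k_o - k_h) / 2.0               # mol CO2 per day
print(r_co2)
```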

Relevance: 60.00%

Abstract:

This work investigated the structure of sulfonated polystyrene-divinylbenzene-based gel-type, mesoporous and macroporous ion exchange resins using several different characterization methods. In addition, the effect of resin pore size on the chromatographic separation of amino acids was studied. The main focus of the work was on determining the pore size and porosity of the resins. Electron microscopy, nitrogen adsorption measurements and inverse size-exclusion chromatography were used for this purpose. The best results were obtained with inverse size-exclusion chromatography, which is based on the use of dextran polymers of different sizes as probe molecules. The method is suitable for studying meso- and macroporosity, but its weakness is the very long measurement time. It also yields a pore size distribution, but measuring a single resin can take a week. The method was therefore modified to use a mixture of two dextran polymers that bracket the pore size range of interest. The chromatographic run conditions were optimized so that the response peaks of the two dextrans in the injected mixture were separated from each other, which allowed the relative porosity of the stationary phase under study to be determined reliably. This fast method based on inverse size-exclusion chromatography, developed in this work, is called the two-point method. The amount and distribution of the sulfonic acid groups of the resins were studied by determining the cation exchange capacity of the resins and by examining the resin surface with confocal Raman spectroscopy. To assess the ion exchange capability of the sulfonic acid groups, the S/K ratio was measured across a cross-section of resin converted to the K+ form. Based on the results, the resins were uniformly sulfonated and 95% of the sulfur atoms were in functional ion exchange groups. In the amino acid separation, lysine, serine and tryptophan were used as model compounds. The resin was in the NH4+ form and the bed volume was 91 mL. Water at pH 10 was used as the eluent. The best result was obtained at a flow rate of 0.1 mL/min, at which all three amino acids were separated from each other on the mesoporous KEF78 resin from Finex Oy. With the other resins studied, the three amino acids were not completely separated under any run conditions.
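The two-point method contrasts two dextran probes through their partition coefficients; the sketch below uses the standard size-exclusion relation K = (Ve − V0)/(Vt − V0) with invented volumes (the relation is standard SEC practice, assumed here rather than quoted from the thesis):

```python
# K = 0 for a fully excluded probe, K = 1 for a fully permeating one, so the
# contrast between the two probes tracks how much of the pore volume lies in
# the targeted size window. All volumes below are hypothetical.
def partition_coefficient(ve, v0, vt):
    """ve: elution volume of the probe, v0: void volume, vt: total volume."""
    return (ve - v0) / (vt - v0)

v0, vt = 35.0, 91.0                                      # mL, hypothetical
k_small = partition_coefficient(ve=80.0, v0=v0, vt=vt)   # small dextran
k_large = partition_coefficient(ve=42.0, v0=v0, vt=vt)   # large dextran
print(k_small - k_large)   # relative accessibility of the size window
```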

Relevance: 60.00%

Abstract:

A theory for the description of turbulent boundary layer flows over surfaces with a sudden change in roughness is considered. The theory resorts to the concept of displacement in origin to specify a wall-function boundary condition for a kappa-epsilon model. An approximate algebraic expression for the displacement in origin is obtained from the experimental data by using the chart method of Perry and Joubert (J.F.M., vol. 17, pp. 193-122, 1963). This expression is subsequently included in the near-wall logarithmic velocity profile, which is then adopted as a boundary condition for a kappa-epsilon modelling of the external flow. The results are compared with the lower-atmosphere observations made by Bradley (Q. J. Roy. Meteo. Soc., vol. 94, pp. 361-379, 1968) as well as with velocity profiles extracted from a set of wind tunnel experiments carried out by Avelino et al. (7th ENCIT, 1998). The measurements are found to be in good agreement with the theoretical computations. The skin-friction coefficient was calculated according to the chart method of Perry and Joubert and to a balance of the integral momentum equation. In particular, the growth of the internal boundary layer thickness obtained from the numerical simulation is compared with predictions from the experimental data calculated by two methods, the "knee" point method and the "merge" point method.
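The near-wall logarithmic profile with a displacement in origin, of the kind fed into the wall function above, can be sketched as follows; the constants and the displacement value are illustrative, not the paper's fitted values:

```python
import numpy as np

KAPPA = 0.41  # von Karman constant

def log_law_rough(y, u_star, z0, d):
    """Mean velocity at height y above a rough wall:
    u(y) = (u*/kappa) * ln((y + d) / z0),
    with z0 a roughness length and d the displacement in origin
    (illustrative rough-wall form; parameter values are assumptions)."""
    return (u_star / KAPPA) * np.log((y + d) / z0)

y = np.linspace(0.01, 1.0, 5)  # heights, m
print(log_law_rough(y, u_star=0.3, z0=1e-3, d=5e-3))
```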

Relevance: 60.00%

Abstract:

Subshifts are sets of configurations over an infinite grid defined by a set of forbidden patterns. In this thesis, we study two-dimensional subshifts of finite type (2D SFTs), where the underlying grid is Z² and the set of forbidden patterns is finite. We are mainly interested in the interplay between the computational power of 2D SFTs and their geometry, examined through the concept of expansive subdynamics. 2D SFTs with expansive directions form an interesting and natural class of subshifts that lie between dimensions 1 and 2. An SFT that has only one non-expansive direction is called extremely expansive. We prove that in many aspects, extremely expansive 2D SFTs display the totality of behaviours of general 2D SFTs. For example, we construct an aperiodic extremely expansive 2D SFT and we prove that the emptiness problem is undecidable even when restricted to the class of extremely expansive 2D SFTs. We also prove that every Medvedev class contains an extremely expansive 2D SFT, and we provide a characterization of the sets of directions that can be the set of non-expansive directions of a 2D SFT. Finally, we prove that for every computable sequence of 2D SFTs with an expansive direction, there exists a universal object that simulates all of the elements of the sequence. We use the so-called hierarchical, self-simulating or fixed-point method for constructing 2D SFTs, which has previously been used by Gács, Durand, Romashchenko and Shen.
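For concreteness, the standard definition of the central object (the notation here is ours, not quoted from the thesis):

```latex
% A 2D subshift of finite type over a finite alphabet A,
% given by a finite set F of forbidden patterns (standard definition).
X_F \;=\; \bigl\{\, x \in A^{\mathbb{Z}^2} \;:\;
      \text{no pattern of } F \text{ occurs in } x \,\bigr\}
```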

Relevance: 60.00%

Abstract:

The starting point of the dissertation is a procedure developed by V. Maz'ya for approximating a given function f : Rⁿ → R by a linear combination fh of radial, smooth, exponentially decaying basis functions which, in contrast to splines, form only an approximate partition of unity and thus define a procedure that does not converge as h → 0. This procedure became known under the name Approximate Approximations. It turns out, however, that this lack of convergence is irrelevant in practice, since the error between f and the approximation fh can be tuned via certain parameters to below the machine precision of today's computers. Moreover, the procedure has great advantages in the numerical solution of Cauchy problems of the form Lu = f with a suitable linear partial differential operator L in Rⁿ. If the right-hand side f is approximated by fh, explicit formulas for the corresponding approximate volume potentials uh can be given in many cases, involving only a one-dimensional integration (e.g. the error function). The procedure developed by Maz'ya has not yet been used for the numerical solution of boundary value problems, apart from heuristic and experimental considerations of the so-called boundary point method. This is where the dissertation starts. On the basis of radial basis functions, a new approximation method is developed which carries over the advantages of the method Maz'ya developed for Cauchy problems to the numerical solution of boundary value problems. As representative cases, the interior Dirichlet problem for the Laplace equation and for the Stokes equations in R² is treated, with convergence analyses carried out and error estimates given for each of the individual approximation steps.
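The simplest one-dimensional Gaussian instance of such a quasi-interpolant makes the "non-convergent but practically exact" behaviour easy to see; this is a standard model case with parameters of our choosing, not the dissertation's construction:

```python
import numpy as np

# Quasi-interpolant f_h(x) = (pi*D)^(-1/2) * sum_m f(m*h) * exp(-(x-m*h)^2/(D*h^2)).
# The Gaussians form only an approximate partition of unity: the saturation
# error decays like exp(-pi^2 * D) and can be pushed below machine precision
# by enlarging D, even though f_h does not converge to f as h -> 0.
def approximate_approximation(f, x, h=0.05, D=2.0, cutoff=8):
    m0 = np.round(x / h).astype(int)
    total = np.zeros_like(x, dtype=float)
    for j in range(-cutoff, cutoff + 1):      # only nearby nodes contribute
        nodes = (m0 + j) * h
        total += f(nodes) * np.exp(-((x - nodes) ** 2) / (D * h * h))
    return total / np.sqrt(np.pi * D)

x = np.linspace(-1.0, 1.0, 9)
f = lambda t: np.sin(3.0 * t)
print(np.max(np.abs(approximate_approximation(f, x) - f(x))))  # small, not -> 0
```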

Relevance: 60.00%

Abstract:

The approximation procedure introduced by Maz'ya, the method of approximate approximations, can also be used for the numerical solution of boundary integral equations (the boundary point method). In this case, the entries of the matrix of the resulting linear system for computing the approximation of the density depend only on the positions of the boundary points and the directions of the outer unit normals at these points. This numerical method is studied for the Dirichlet problem for the Laplace equation and the Stokes equations in a bounded two-dimensional domain. The boundary point method comprises three steps: In the first step, the unknown density is approximated by a linear combination of radial, exponentially decaying basis functions. In the second step, the integration over the boundary is replaced by integration over the tangents at the boundary points; analytic expressions can even be obtained for the approximate potentials that arise. In the third step, the linear system is solved, and an approximation of the unknown density, and hence of the solution of the boundary value problem, is constructed. The convergence of this method is proved for smooth convex domains.
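The collocate-and-solve structure of these three steps can be imitated with a runnable stand-in. The sketch below uses the classical method of fundamental solutions for the interior Laplace Dirichlet problem on the unit disc; it is not the boundary point method (no tangent integration, different basis functions), but it shows the same pattern of recovering density coefficients from a system built purely from boundary-point geometry:

```python
import numpy as np

n = 64
t = 2.0 * np.pi * np.arange(n) / n
bdry = np.stack([np.cos(t), np.sin(t)], axis=1)  # collocation points on the circle
src = 1.5 * bdry                                  # source points outside the domain

def kernel(x, y):
    """Fundamental solution of the 2D Laplacian."""
    return -np.log(np.linalg.norm(x - y, axis=-1)) / (2.0 * np.pi)

g = lambda p: p[:, 0] * p[:, 1]                   # boundary data u = xy (harmonic)
A = kernel(bdry[:, None, :], src[None, :, :])     # boundary system matrix
c = np.linalg.solve(A, g(bdry))                   # "density" coefficients

probe = np.array([[0.3, 0.2]])                    # interior evaluation point
u = kernel(probe[:, None, :], src[None, :, :]) @ c
print(u, probe[0, 0] * probe[0, 1])               # close agreement
```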