966 results for Two-point boundary value problems
Abstract:
Non-pressure-compensating drip hose is widely used for the irrigation of vegetables and orchards. One limitation is that the lateral line must be kept short to maintain uniformity in the face of head loss and slope. Any procedure that increases the feasible length is valuable because it lowers the initial cost of the irrigation system. The hypothesis of this research is that the lateral line can be lengthened by combining two measures: using a larger spacing between emitters at the beginning of the lateral line and a smaller one after a certain distance, and allowing a higher pressure variation along the lateral line while keeping distribution uniformity at an acceptable value. To evaluate this hypothesis, a nonlinear programming (NLP) model was developed. The input data are: diameter, roughness coefficient, pressure variation, emitter operating pressure, and the relationship between emitter discharge and pressure. The output data are: line length; discharge and length of each section with a different spacing between drippers; total discharge in the lateral line; multiple-outlet adjustment coefficient; head losses; localized head loss; pressure variation; number of emitters; spacing between emitters; discharge of each emitter; and discharge per linear meter. The mathematical model was compared with the lateral line length obtained from the algebraic solution based on the Darcy-Weisbach equation. The NLP model showed the best results, since it produced the greatest gain in lateral line length while keeping uniformity and flow variation within acceptable standards. It also had the lowest flow variation, so its adoption is feasible and recommended.
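As a rough, self-contained illustration of the algebraic baseline the abstract compares against, the sketch below marches emitter by emitter along a lateral with two spacing zones and accumulates Darcy-Weisbach friction losses. Every numeric parameter (pipe diameter, the k and x of the emitter law q = k·h^x, the spacings, the distal head) is an assumption made for illustration, not a value from the study.

```python
# Hedged sketch: stepwise back-calculation of a drip lateral with two
# emitter-spacing zones, using Darcy-Weisbach friction losses. All pipe and
# emitter parameters are illustrative assumptions, not values from the study.
import math

G = 9.81            # gravity (m/s^2)
D = 0.016           # inside diameter (m), assumed
NU = 1.0e-6         # kinematic viscosity of water (m^2/s)
K_EMIT, X_EXP = 1.76e-7, 0.5   # emitter law q = k*h^x (q in m^3/s, h in m), assumed

def friction_factor(re):
    """Laminar formula below Re = 2300, Blasius correlation (smooth PE) above."""
    return 64.0 / re if re < 2300.0 else 0.316 * re ** -0.25

def required_inlet_head(h_end, spacings):
    """March upstream from the distal end: add each emitter's discharge to the
    running pipe flow, then add the Darcy-Weisbach loss of the next segment."""
    h, q = h_end, 0.0
    for s in reversed(spacings):
        q += K_EMIT * h ** X_EXP                  # local emitter discharge
        v = 4.0 * q / (math.pi * D * D)           # mean velocity in the pipe
        re = max(v * D / NU, 1.0)
        h += friction_factor(re) * (s / D) * v * v / (2.0 * G)
    return h, q

# wider spacing near the inlet (where pressure is high), narrower farther along
spacings = [1.0] * 50 + [0.5] * 100               # 100 m lateral, 150 emitters
h_in, q_total = required_inlet_head(h_end=9.0, spacings=spacings)
print(f"inlet head {h_in:.2f} m, distal head 9.00 m, "
      f"pressure variation {100 * (h_in - 9.0) / h_in:.1f} %, "
      f"total flow {q_total * 3600e3:.1f} L/h")
```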
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in University Mathematics (Pós-graduação em Matemática Universitária) - IGCE
Abstract:
The feasibility of applying the coplanar coil array to electromagnetic induction well-logging tools was studied. In parallel, the responses of the conventional coaxial array, the one widely used in commercial tools, were generated for a comparative analysis. Using the analytical solution (homogeneous media) and the semi-analytical solution (heterogeneous media), responses were first generated for simpler models, such as (1) a homogeneous, isotropic, unbounded medium; (2) a cylindrical shell simulating the invasion front; (3) two cylindrical shells simulating the annulus effect; (4) a plane interface between two half-spaces simulating the contact between two thick beds; and (5) a plane-horizontal layer between two identical half-spaces. Despite their simplicity, these models allow a detailed analysis of the effects that certain geoelectrical parameters have on the responses. Then, by applying the boundary conditions at the interfaces (Sommerfeld boundary value problem), we obtained the semi-analytical solutions that allowed us to simulate the responses of relatively more complex models, such as (1) gradational transition zones at the invasion fronts; (2) sequences of horizontal and dipping plane-parallel layers; (3) laminated sequences that simulate anisotropic media; and (4) a gradational transition between two thick beds. We conclude that the coplanar coil array can be an auxiliary tool for (1) delineating the interfaces of thick beds; (2) locating thin reservoirs; (3) evaluating invasion profiles; and (4) locating azimuthal conductivity variations.
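For the simplest model listed above, the homogeneous, isotropic, unbounded medium, the coaxial (axial) response has a classical closed form, H_z = m/(2πL³)·e^{ikL}(1 − ikL) with k² = iωμ₀σ in the quasi-static limit; this is the textbook full-space solution (e.g., Ward & Hohmann), offered here as a plausible building block for the abstract's first model. The sketch below evaluates it; the tool frequency and coil spacing are illustrative assumptions.

```python
# Hedged sketch: coaxial magnetic-dipole response in a homogeneous, isotropic,
# unbounded conductive medium -- the simplest model in the study.
#   H_z = m / (2*pi*L^3) * exp(i*k*L) * (1 - i*k*L),   k^2 = i*omega*mu0*sigma
# Quasi-static full-space solution; frequency and spacing values are assumed.
import cmath, math

MU0 = 4e-7 * math.pi

def coaxial_response(sigma, freq, spacing, moment=1.0):
    omega = 2.0 * math.pi * freq
    k = cmath.sqrt(1j * omega * MU0 * sigma)   # principal root, Im(k) > 0
    ikL = 1j * k * spacing
    return moment / (2.0 * math.pi * spacing**3) * cmath.exp(ikL) * (1.0 - ikL)

free_space = 1.0 / (2.0 * math.pi * 1.0**3)    # sigma -> 0 limit, L = 1 m
for sigma in (0.01, 0.1, 1.0, 5.0):            # formation conductivities (S/m)
    hz = coaxial_response(sigma, freq=20e3, spacing=1.0)
    # the quadrature (imaginary) part is roughly proportional to sigma at low
    # induction numbers, which is the effect an induction tool exploits
    print(f"sigma={sigma:5.2f} S/m  Hz/Hz0 = {hz / free_space:.6f}")
```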
Abstract:
Graduate Program in School Education (Pós-graduação em Educação Escolar) - FCLAR
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
This paper deals with the numerical solution of complex fluid dynamics problems using a new bounded high-resolution upwind scheme (called SDPUS-C1 henceforth) for the discretization of convection terms. The scheme is based on the TVD and CBC stability criteria and is implemented in the context of finite volume/difference methodologies, either in the CLAWPACK software package for compressible flows or in the Freeflow simulation system for incompressible viscous flows. The performance of the proposed non-oscillatory upwind scheme is demonstrated by solving two-dimensional compressible flow problems, such as shock wave propagation, and two-dimensional/axisymmetric incompressible moving free-surface flows. The numerical results demonstrate that this new cell-interface reconstruction technique works very well in several practical applications. (C) 2012 Elsevier Inc. All rights reserved.
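The SDPUS-C1 reconstruction itself is not reproduced in the abstract, so the sketch below only shows the general shape of a bounded high-resolution upwind scheme of the same family: a finite-volume update for 1D linear advection whose face values are slope-limited so the update respects the TVD constraint. The minmod limiter is a stand-in assumption, not the SDPUS-C1 formula.

```python
# Hedged sketch: generic TVD-limited upwind finite-volume step for 1D linear
# advection u_t + a u_x = 0 (a > 0), periodic domain. The minmod limiter is a
# stand-in for the paper's SDPUS-C1 reconstruction, which is not given here.
import numpy as np

def minmod(r):
    """Minmod limiter: keeps the limited reconstruction within TVD bounds."""
    return np.maximum(0.0, np.minimum(1.0, r))

def step(u, cfl):
    """One explicit finite-volume step; cfl = a*dt/dx must satisfy cfl <= 1."""
    eps = 1e-12
    fwd = np.roll(u, -1) - u                 # u_{i+1} - u_i
    bwd = u - np.roll(u, 1)                  # u_i - u_{i-1}
    r = bwd / (fwd + eps)                    # smoothness ratio r_i
    uface = u + 0.5 * minmod(r) * fwd        # limited upwind value at face i+1/2
    return u - cfl * (uface - np.roll(uface, 1))

# advect a square pulse once around the periodic domain
n, cfl = 200, 0.5
u = np.where((np.arange(n) > 40) & (np.arange(n) < 80), 1.0, 0.0)
for _ in range(int(n / cfl)):
    u = step(u, cfl)
print(f"min={u.min():.4f}  max={u.max():.4f}  (solution stays bounded in [0,1])")
```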
Abstract:
This dissertation addressed two different questions. First, within the priority program "Kolloidverfahrenstechnik" (colloid process engineering) and in collaboration with the group of Prof. Dr. Heike Schuchmann at KIT in Karlsruhe, the encapsulation of silica nanoparticles in a PMMA shell by miniemulsion polymerization was developed and the scale-up process using high-pressure homogenizers was advanced. Second, various fluorinated nanoparticles were generated by the miniemulsion process and their behavior in cells was investigated. Silica particles were successfully encapsulated by miniemulsion polymerization in two different processes. In the first method, modified silica particles were dispersed in an MMA monomer phase and silica-loaded droplets were then generated by the standard miniemulsion process; these could be polymerized into composite particles. In encapsulation via the fission/fusion process, the hydrophobized silica particles were introduced into pre-existing monomer droplets through fission and fusion events, and the droplets were subsequently polymerized. To disperse hydrophilic silica in a hydrophobic monomer, the silica particles first had to be modified. This was done, among other approaches, by chemically grafting 3-methacryloxypropyltrimethoxysilane onto the surface of the silica particles. In addition, the hydrophilic silica particles were physically modified by adsorption of CTMA-Cl. By varying, among other factors, the encapsulation process, the amount of silica, the type and amount of surfactant, and the comonomers, composite particles with different morphologies, sizes, and filling degrees were obtained. Fluorinated nanoparticles were successfully synthesized via miniemulsion polymerization. The monomers used were fluorinated acrylates, fluorinated methacrylates, and fluorinated styrene; it was possible to produce fluorinated nanoparticles from each of these three groups. For more detailed investigations, 2,3,4,5,6-pentafluorostyrene, 3,3,4,4,5,5,6,6,7,7,8,8,9,9,10,10,10-heptadecafluorodecyl methacrylate, and 1H,1H,2H,2H-perfluorodecyl acrylate were selected as monomers. Perfluoromethyldecalin was used as the hydrophobe to suppress Ostwald ripening. The most stable miniemulsions were again generated with the ionic surfactant SDS, and with increasing SDS content dissolved in the continuous phase a decrease in particle size was observed. In addition to homopolymer particles, copolymer particles with acrylic acid were also successfully synthesized. The behavior of the fluorinated particles in cells was then examined; the particles showed no toxic behavior. The adsorption of proteins from human serum was investigated by ITC measurements. It was thus shown that miniemulsion polymerization is a versatile and effective technique for generating hybrid nanoparticles with different morphologies as well as surface-functionalized nanoparticles.
Abstract:
PURPOSE Therapeutic drug monitoring of patients receiving once daily aminoglycoside therapy can be performed using pharmacokinetic (PK) formulas or Bayesian calculations. While these methods produced comparable results, their performance has never been checked against full PK profiles. We performed a PK study in order to compare both methods and to determine the best time-points to estimate AUC0-24 and peak concentrations (Cmax). METHODS We obtained full PK profiles in 14 patients receiving a once daily aminoglycoside therapy. PK parameters were calculated with PKSolver using non-compartmental methods. The calculated PK parameters were then compared with parameters estimated using an algorithm based on two serum concentrations (two-point method) or the software TCIWorks (Bayesian method). RESULTS For tobramycin and gentamicin, AUC0-24 and Cmax could be reliably estimated using a first serum concentration obtained at 1 h and a second one between 8 and 10 h after start of the infusion. The two-point and the Bayesian method produced similar results. For amikacin, AUC0-24 could reliably be estimated by both methods. Cmax was underestimated by 10-20% by the two-point method and by up to 30% with a large variation by the Bayesian method. CONCLUSIONS The ideal time-points for therapeutic drug monitoring of once daily administered aminoglycosides are 1 h after start of a 30-min infusion for the first time-point and 8-10 h after start of the infusion for the second time-point. Duration of the infusion and accurate registration of the time-points of blood drawing are essential for obtaining precise predictions.
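A minimal sketch of the two-point idea, assuming a one-compartment model with first-order elimination and log-linear decline after the infusion; the paper's exact algorithm is not reproduced, and the concentrations, sampling times, and infusion duration below are invented for illustration.

```python
# Hedged sketch: two-point estimation of ke, Cmax, and AUC0-24 for a once-daily
# aminoglycoside, assuming a one-compartment model with log-linear decline.
# Times and concentrations are invented; this is not the paper's algorithm.
import math

def two_point_pk(c1, t1, c2, t2, t_inf=0.5, tau=24.0):
    ke = math.log(c1 / c2) / (t2 - t1)          # elimination rate constant (1/h)
    cmax = c1 * math.exp(ke * (t1 - t_inf))     # back-extrapolate to end of infusion
    # AUC: roughly triangular infusion phase + exponential decay until next dose
    auc_inf = 0.5 * cmax * t_inf
    auc_decay = (cmax / ke) * (1.0 - math.exp(-ke * (tau - t_inf)))
    return ke, cmax, auc_inf + auc_decay

# samples drawn 1 h and 8 h after the start of a 30-min infusion, matching the
# time-points the abstract recommends; mg/L values are made up
ke, cmax, auc = two_point_pk(c1=18.0, t1=1.0, c2=2.5, t2=8.0)
print(f"ke={ke:.3f}/h  t1/2={math.log(2) / ke:.2f} h  "
      f"Cmax~{cmax:.1f} mg/L  AUC0-24~{auc:.0f} mg*h/L")
```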
Abstract:
Measurement association and initial orbit determination is a fundamental task when building up a database of space objects. This paper proposes an efficient and robust method to determine the orbit using the available information of two tracklets, i.e., their lines of sight and the corresponding derivatives. The approach works with a boundary-value formulation to represent hypothesized orbital states and uses an optimization scheme to find the best-fitting orbits. The method is assessed and compared to an initial-value formulation using a measurement set taken by the Zimmerwald Small Aperture Robotic Telescope of the Astronomical Institute at the University of Bern. False associations of closely spaced objects on similar orbits cannot be completely eliminated due to the short duration of the measurement arcs. However, the presented approach uses the available information optimally, and the overall association performance and robustness are very promising. The boundary-value optimization takes only around 2% of the computational time of optimization approaches using an initial-value formulation. The full run-time potential of the method is further illustrated by comparison with other published association methods.
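In symbols, the boundary-value formulation sketched in the abstract can be read as follows (a plausible reading with assumed notation, not the paper's exact scheme): each tracklet gives a unit line of sight û_i from the station position R_i, a hypothesized range pair fixes the two boundary positions, a Lambert-type solve yields the connecting orbit, and the optimizer scores that orbit against the measured line-of-sight drift.

```latex
% Hedged sketch of the boundary-value association scheme (notation assumed):
% ranges rho_1, rho_2 turn the tracklets into boundary positions, and the
% optimizer minimizes the misfit to the observed line-of-sight rates.
\begin{align*}
  \mathbf{r}_i &= \mathbf{R}_i(t_i) + \rho_i\,\hat{\mathbf{u}}_i(t_i),
    \qquad i = 1,2,\\
  (\mathbf{v}_1,\mathbf{v}_2) &= \mathrm{Lambert}\bigl(\mathbf{r}_1,\mathbf{r}_2,\,t_2 - t_1\bigr),\\
  J(\rho_1,\rho_2) &= \sum_{i=1,2}
      \bigl\lVert \dot{\hat{\mathbf{u}}}_i^{\,\mathrm{pred}}(\rho_1,\rho_2)
                 - \dot{\hat{\mathbf{u}}}_i^{\,\mathrm{obs}} \bigr\rVert^2,
  \qquad
  (\rho_1^\ast,\rho_2^\ast) = \arg\min_{\rho_1,\rho_2 > 0} J(\rho_1,\rho_2).
\end{align*}
```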
Abstract:
PURPOSE Few studies have used multivariate models to quantify the effect of multiple previous spine surgeries on patient-oriented outcome after spine surgery. This study sought to quantify the effect of prior spine surgery on 12-month postoperative outcomes in patients undergoing surgery for different degenerative disorders of the lumbar spine. METHODS The study included 4940 patients with lumbar degenerative disease documented in the Spine Tango Registry of EUROSPINE, the Spine Society of Europe, from 2004 to 2015. Preoperatively and 12 months postoperatively, patients completed the multidimensional Core Outcome Measures Index (COMI; 0-10 scale). Patients' medical history and surgical details were recorded using the Spine Tango Surgery 2006 and 2011 forms. Multiple linear regression models were used to investigate the relationship between the number of previous surgeries and the 12-month postoperative COMI score, controlling for the baseline COMI score and other potential confounders. RESULTS In the adjusted model including all cases, the 12-month COMI score showed a 0.37-point worse value [95 % confidence intervals (95 % CI) 0.29-0.45; p < 0.001] for each additional prior spine surgery. In the subgroup of patients with lumbar disc herniation, the corresponding effect was 0.52 points (95 % CI 0.27-0.77; p < 0.001) and in lumbar degenerative spondylolisthesis, 0.40 points (95 % CI 0.17-0.64; p = 0.001). CONCLUSIONS We were able to demonstrate a clear "dose-response" effect for previous surgery: the greater the number of prior spine surgeries, the systematically worse the outcome at 12 months' follow-up. The results of this study can be used when considering or consenting a patient for further surgery, to better inform the patient of the likely outcome and to set realistic expectations.
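As a toy illustration of the adjusted model described above (12-month COMI regressed on baseline COMI and the number of prior surgeries), the sketch below fits the same regression structure on synthetic data; the coefficient on n_prior plays the role of the reported 0.37-point-per-surgery effect. The data and the simulated effect size are invented, not the registry's.

```python
# Hedged sketch: the regression structure described above, fitted on synthetic
# data -- 12-month COMI ~ baseline COMI + number of prior surgeries. The
# simulated per-surgery effect (0.4) only mimics the reported 0.37.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "comi_base": rng.uniform(4, 10, n),    # preoperative COMI (0-10 scale)
    "n_prior": rng.integers(0, 4, n),      # number of prior spine surgeries
})
df["comi_12m"] = (1.0 + 0.5 * df["comi_base"] + 0.4 * df["n_prior"]
                  + rng.normal(0, 1.5, n)).clip(0, 10)

fit = smf.ols("comi_12m ~ comi_base + n_prior", data=df).fit()
print(f"per-surgery effect: {fit.params['n_prior']:.2f} "
      f"(95% CI {fit.conf_int().loc['n_prior'].values.round(2)})")
```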
Abstract:
Point-of-decision signs to promote stair use have been found to be effective in various environments. However, these signs have been more consistently successful in public access settings that use escalators, such as shopping centers and transportation stations, compared to worksite settings, which are more likely to contain elevators that are not directly adjacent to the stairs. Therefore, this study tested the effectiveness of two point-of-decision sign prompts to increase stair use in a university worksite setting. Also, this study investigated the importance of the message content of the signs. One sign displayed a general health promotion message, while the other sign presented more specific information. Overall, this project examined whether the presence of the point-of-decision signs increases stair use. In addition, this research determined whether the general or specific sign promotes greater stair use. Inconspicuous observers measured stair use both before the signs were present and while they were posted. The study setting was the University of Texas School of Nursing, and the target population was anyone who entered the building, including employees, students, and visitors. The study was conducted over six weeks and included two weeks of baseline measurement, two weeks with the general sign posted, and two weeks with the specific sign posted. Each sign was displayed on a stand in the decision point area near the stairs and the elevator. Logistic regression was used to analyze the data. After adjustment for covariates, the odds of stair use were significantly greater during the intervention period than the baseline period. Furthermore, the specific sign period showed significantly greater odds of stair use than the general sign period. These results indicate that a point-of-decision sign intervention can be effective at promoting stair use in a university worksite setting and that a sign with a specific health information message may be more effective at promoting stair use than a sign with a general health promotion message. These findings can be considered when planning future worksite and university based stair promotion interventions.
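As a back-of-the-envelope companion to the logistic regression, the sketch below computes unadjusted odds ratios with Wald 95% confidence intervals from hypothetical stair/elevator counts for the three periods. The counts are invented, and the study's actual analysis additionally adjusts for covariates.

```python
# Hedged sketch: unadjusted odds ratios for stair use by study period, with
# Wald 95% confidence intervals. Counts are invented for illustration; the
# study's logistic regression additionally adjusted for covariates.
import math

def odds_ratio(stairs_a, elev_a, stairs_b, elev_b):
    """OR of period A vs period B with a Wald 95% CI on the log scale."""
    or_ = (stairs_a / elev_a) / (stairs_b / elev_b)
    se = math.sqrt(1 / stairs_a + 1 / elev_a + 1 / stairs_b + 1 / elev_b)
    lo, hi = (math.exp(math.log(or_) + s * 1.96 * se) for s in (-1, 1))
    return or_, lo, hi

baseline = (120, 880)   # (stair users, elevator users), invented counts
general = (160, 840)
specific = (210, 790)

print("general  vs baseline: OR=%.2f (95%% CI %.2f-%.2f)" % odds_ratio(*general, *baseline))
print("specific vs general : OR=%.2f (95%% CI %.2f-%.2f)" % odds_ratio(*specific, *general))
```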
Abstract:
Temperature is a first-class design concern in modern integrated circuits. The large increase in power densities associated with recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. To fight these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip based on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based on a mechanism that produces a pulse whose width depends on the variation of the leakage currents with temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the pulse width depends exponentially on temperature, the conversion into a digital word is performed by a logarithmic counter that carries out both the time-to-digital conversion and the linearization of the output. The resulting structure is implemented in a 0.35 µm technology and is characterized by very small area, 10,250 µm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time of first publication and, at the time of publication of this thesis, still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration, displaying a 3σ error of 1.97 °C, appropriate for DTM applications. The sensor is fully compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip.
The exacerbated process fluctuations of recent technology nodes jeopardize the linearity of the first sensor. To overcome this problem, a new temperature-inferring technique is proposed. It also relies on the thermal dependence of the leakage currents used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations and displays more than sufficient linearity with temperature (a 3σ error of 1.17 °C considering process variations and two-point calibration). The implementation of the sensing part based on this new technique raises several design issues, such as the generation of a process-variation-independent voltage reference, which are analyzed in depth in the thesis. For the time-to-digital conversion, the same digitization structure as in the first sensor is employed, and a completely new standard-cell library targeting low area and power overhead was built from scratch to implement the digitization part. Putting all the pieces together, the complete sensor achieves ultra-low energy per conversion, 48-640 pJ, and an area of 0.0016 mm²; this figure outperforms all previous works. To support this claim, a thorough comparison with over 40 sensor proposals from the scientific literature is performed.
Moving up to the system level, the third contribution centers on modeling a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works in the literature aim to maximize the accuracy of the system with the minimum number of monitors. In contrast, new quality metrics are introduced beyond the number of sensors: power consumption, sampling frequency, interconnection costs, and the possibility of choosing among different monitor types. The model is embedded in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power, and interconnection constraints, and a collection of monitor types; the algorithm yields the selected monitor type, the number of monitors, their positions, and the optimum sampling rate. The algorithm is tested on the Alpha 21364 processor under several constraint configurations to prove its validity. Compared with previous works in the literature, the model presented here is the most complete. Finally, the last contribution targets the network level: given an allocated set of temperature monitors, it addresses the problem of connecting them efficiently in terms of area and power. The first proposal here is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller viewpoint only a small amount of data (normally the extreme values) is of interest. To cover the new level, a single-wire monitoring network based on a time-domain signaling scheme is proposed; it significantly reduces both the switching activity over the wire and the power consumption of the network. The scheme codes the information in the time domain and directly yields an ordered list of values from the maximum to the minimum. If it is applied to monitors that perform time-to-digital conversion, digitization resources can be shared in both time and space, producing important savings in area and power. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
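A small numerical sketch of the two sensing principles described above, using a generic subthreshold-leakage model with invented device constants (not the thesis' values): the discharge time of the floating node grows exponentially as temperature drops, so its base-2 logarithm (what a logarithmic counter effectively records) is nearly linear in temperature over a narrow range; and the ratio of two pulse widths taken at different gate voltages cancels the process-dependent prefactor.

```python
# Hedged sketch of the two sensing principles, with a generic subthreshold
# leakage model and invented constants: (1) pulse width is exponential in
# temperature, so a logarithmic counter linearizes it; (2) the width ratio at
# two gate voltages cancels the process-dependent prefactor I0 exactly.
import math

Q, KB = 1.602e-19, 1.381e-23

def leakage(temp_k, vgs, i0=1e-6, vth=0.45, n=1.5):
    """Generic subthreshold current; i0 lumps the process variation."""
    ut = KB * temp_k / Q                        # thermal voltage kT/q
    return i0 * math.exp((vgs - vth) / (n * ut))

def pulse_width(temp_k, vgs, i0=1e-6, cnode=100e-15, swing=1.0):
    """Time to discharge the floating node: t = C * dV / I_leak."""
    return cnode * swing / leakage(temp_k, vgs, i0=i0)

for t_c in (0, 25, 50, 75, 100):
    t_k = 273.15 + t_c
    w = pulse_width(t_k, vgs=0.0)
    # the same ratio results for any i0: the process prefactor cancels
    ratio = pulse_width(t_k, vgs=0.0, i0=3e-6) / pulse_width(t_k, vgs=0.1, i0=3e-6)
    print(f"T={t_c:3d} C  log2(width)={math.log2(w):6.2f}  width-ratio={ratio:7.3f}")
```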
Abstract:
García et al. present a class of column generation (CG) algorithms for nonlinear programs. Its main theoretical motivation is that, under some circumstances, finite convergence can be achieved, in much the same way as for the classic simplicial decomposition method; the main practical motivation is that the class contains nonlinear column generation problems that can accelerate the convergence of a solution approach that generates a sequence of feasible points. Such an algorithm can, for example, accelerate simplicial decomposition schemes by making the subproblems nonlinear. This paper complements the theoretical study of the asymptotic and finite convergence of these methods given in [1] with an experimental study focused on their computational efficiency. Three types of numerical experiments are conducted. The first group of test problems is designed to study the parameters involved in these methods. The second group investigates the role and the computation of the prolongation of the generated columns to the relative boundary. The last one carries out a more complete investigation of the difference in computational efficiency between linear and nonlinear column generation approaches. Two types of test problems are considered in this investigation: the first is the nonlinear, capacitated single-commodity network flow problem, of which several large-scale instances with varied degrees of nonlinearity and total capacity are constructed and investigated; the second is a combined traffic assignment model.
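A compact sketch of the linear column generation / simplicial decomposition loop that the experiments build on, applied to a toy convex quadratic over a box: the LP subproblem prices out a new extreme point with the current gradient, and the master problem re-optimizes over the convex hull of the generated columns. The paper's nonlinear columns and network-flow instances are not reproduced; scipy is assumed available.

```python
# Hedged sketch: simplicial decomposition (linear column generation) for
# min f(x) = ||x - c||^2 over a box, as a toy stand-in for the paper's
# network instances. Subproblem: LP pricing with the current gradient;
# master: minimize f over the convex hull of the generated columns.
import numpy as np
from scipy.optimize import linprog, minimize

c = np.array([0.7, 0.9, 0.2])                  # unconstrained optimum (in box)
bounds = [(0.0, 1.0)] * 3

def f(x):    return float(np.sum((x - c) ** 2))
def grad(x): return 2.0 * (x - c)

cols = [np.zeros(3)]                           # initial feasible column
x = cols[0]
for it in range(20):
    g = grad(x)
    sub = linprog(g, bounds=bounds, method="highs")   # LP pricing subproblem
    y = sub.x
    if g @ (y - x) >= -1e-9:                   # no descent column: optimal
        break
    cols.append(y)
    A = np.array(cols).T                       # x = A @ lam, lam in the simplex
    k = len(cols)
    res = minimize(lambda lam: f(A @ lam), np.ones(k) / k,
                   jac=lambda lam: A.T @ grad(A @ lam),
                   constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}],
                   bounds=[(0.0, 1.0)] * k, method="SLSQP")
    x = A @ res.x                              # restricted master solution
print(f"iterations={it}, x={np.round(x, 3)}, f(x)={f(x):.2e}")
```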
Abstract:
In this paper, the explicit and implicit Adams formulas are derived in a simple way and a relationship between them is established, allowing their joint implementation as predictor-corrector methods. The usefulness, from a didactic point of view, of Excel spreadsheets for carrying out the calculations and presenting the results in an orderly way is demonstrated by applying Adams methods to the solution of initial value problems in ordinary differential equations.
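The paper carries out this predictor-corrector construction in Excel; the sketch below shows the same pairing in Python as a stand-in: a two-step Adams-Bashforth predictor with an Adams-Moulton (trapezoidal) corrector in PECE mode, applied to y' = -y, whose exact solution e^{-t} makes the error easy to check.

```python
# Hedged sketch: two-step Adams-Bashforth predictor with an Adams-Moulton
# (trapezoidal) corrector in PECE mode, applied to y' = -y, y(0) = 1.
# The paper performs this construction in Excel; Python is a stand-in.
import math

def pece_ab2_am2(f, t0, y0, h, n_steps):
    f_prev = f(t0, y0)
    # start-up step with Heun's method (AB2 needs two back values)
    y1 = y0 + 0.5 * h * (f_prev + f(t0 + h, y0 + h * f_prev))
    ts, ys = [t0, t0 + h], [y0, y1]
    for _ in range(n_steps - 1):
        t, y = ts[-1], ys[-1]
        fn = f(t, y)
        y_pred = y + 0.5 * h * (3.0 * fn - f_prev)        # AB2 predictor (P, E)
        y_corr = y + 0.5 * h * (fn + f(t + h, y_pred))    # AM2 corrector (C, E)
        f_prev = fn
        ts.append(t + h)
        ys.append(y_corr)
    return ts, ys

ts, ys = pece_ab2_am2(lambda t, y: -y, 0.0, 1.0, h=0.1, n_steps=20)
err = max(abs(y - math.exp(-t)) for t, y in zip(ts, ys))
print(f"max |error| on [0, 2] with h = 0.1: {err:.2e}")
```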