992 results for iterative method


Relevance:

60.00%

Abstract:

A numerical algorithm was developed to solve the thermochemical conversion of a solid fuel. It was designed to be flexible and to depend on the reaction mechanism to be represented. To this end, the system of equations characteristic of this type of problem was solved by an iterative method coupled with symbolic mathematics. Owing to nonlinearities in the equations, and because small particles are involved, Newton's method is applied to reduce the system of partial differential equations (PDEs) to a system of ordinary differential equations (ODEs). This reduction couples the iterative method with numerical differentiation, since it can incorporate analytical functions into the resulting ODEs. The reduced model is solved numerically using the biconjugate gradient (BiCG) technique. The model promises a high convergence rate with a small number of iterations, and delivers the solutions of the resulting linear system quickly. Moreover, the algorithm proves to be independent of the mesh size. For validation, the normalized mass is computed and compared with experimental thermogravimetry values from the literature, and a test with a simplified reaction mechanism is carried out.
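
The abstract gives no implementation details; the following is a minimal Python sketch of the core loop it describes, a Newton iteration whose linearized system is handed to a BiCG solver. The residual and Jacobian below are hypothetical stand-ins, not the paper's fuel-conversion model.

import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicg

def residual(u):
    # Hypothetical nonlinear residual F(u) = 0 standing in for the
    # discretized conversion equations (not the paper's model).
    return u**3 - 2.0 * u - 1.0 + 0.1 * np.roll(u, 1)

def jacobian(u):
    # Analytical Jacobian of the stand-in residual, assembled sparse.
    n = len(u)
    J = diags([3.0 * u**2 - 2.0, 0.1 * np.ones(n - 1)], [0, -1], format="csr")
    J = J + diags([np.array([0.1])], [n - 1], shape=(n, n), format="csr")  # roll wrap-around
    return J

u = np.ones(50)                       # initial guess on a 50-point "mesh"
for k in range(20):                   # Newton iterations
    F = residual(u)
    if np.linalg.norm(F) < 1e-10:
        break
    du, info = bicg(jacobian(u), -F)  # linearized step solved with BiCG
    u += du
print(k, np.linalg.norm(residual(u)))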

Relevance:

60.00%

Abstract:

In this work, we consider the numerical solution of a large eigenvalue problem resulting from a finite rank discretization of an integral operator. We are interested in computing a few eigenpairs, with an iterative method, so a matrix representation that allows for fast matrix-vector products is required. Hierarchical matrices are appropriate for this setting, and also provide cheap LU decompositions required in the spectral transformation technique. We illustrate the use of freely available software tools to address the problem, in particular SLEPc for the eigensolvers and HLib for the construction of H-matrices. The numerical tests are performed using an astrophysics application. Results show the benefits of the data-sparse representation compared to standard storage schemes, in terms of computational cost as well as memory requirements.
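
The abstract's toolchain (SLEPc for the eigensolver, HLib for the H-matrix arithmetic) is not shown; as a hedged analogue, the sketch below uses SciPy's ARPACK wrapper in shift-invert mode, which applies the same spectral-transformation idea: factorize A - sigma*I once (an LU decomposition, cheap for H-matrices in the paper's setting) and reuse it for the fast solves inside the iterative eigensolver. The matrix is a synthetic sparse stand-in, not a discretized integral operator.

import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import eigs

n = 2000
rng = np.random.default_rng(0)
A = sprandom(n, n, density=1e-3, random_state=rng)
A = A + A.T + 4.0 * identity(n)   # synthetic symmetric sparse operator

# Shift-and-invert spectral transformation: eigenvalues of A nearest the
# shift sigma become dominant eigenvalues of inv(A - sigma*I), which the
# Arnoldi iteration then finds quickly via repeated solves with the LU
# factors of the shifted matrix.
sigma = 3.5
vals, vecs = eigs(A, k=4, sigma=sigma, which="LM")
print(np.sort(vals.real))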

Relevance:

60.00%

Abstract:

Dissertation of a scientific nature for obtaining the degree of Master in Civil Engineering, in the specialization area of Structures.

Relevance:

60.00%

Abstract:

Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.

Relevance:

60.00%

Abstract:

The application of compressive sensing (CS) to hyperspectral images has been an active area of research over the past few years, both in terms of hardware and of signal processing algorithms. However, CS algorithms can be computationally very expensive due to the extremely large volumes of data collected by imaging spectrometers, a fact that compromises their use in applications under real-time constraints. This paper proposes four efficient implementations of hyperspectral coded aperture (HYCA) for CS on commodity graphics processing units (GPUs): two of them, termed P-HYCA and P-HYCA-FAST, and two additional implementations for its constrained version (CHYCA), termed P-CHYCA and P-CHYCA-FAST. The HYCA algorithm exploits the high correlation among the spectral bands of hyperspectral data sets and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. The proposed P-HYCA and P-CHYCA implementations were developed using the compute unified device architecture (CUDA) and the cuFFT library. In the P-HYCA-FAST and P-CHYCA-FAST implementations, this library is replaced by a fast iterative method, which leads to very significant speedup factors and makes it possible to meet real-time requirements. The proposed algorithms are evaluated not only in terms of reconstruction error for different compression ratios but also in terms of computational performance using two different GPU architectures by NVIDIA: 1) GeForce GTX 590; and 2) GeForce GTX TITAN. Experiments conducted using both simulated and real data reveal considerable acceleration factors and good results in the task of compressing remotely sensed hyperspectral data sets.
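
HYCA's exact formulation is not given in the abstract; as a generic, hedged illustration of the kind of FFT-based iterative reconstruction it refers to, the sketch below runs a plain ISTA loop (gradient step plus soft thresholding) against a subsampled-DFT measurement operator in Python/NumPy. The operator, signal, and parameters are synthetic stand-ins, not HYCA.

import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 80
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)  # sparse signal

F = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)   # unitary DFT matrix
A = F[rng.choice(n, m, replace=False)]           # keep m random Fourier rows
y = A @ x_true                                   # compressive measurements

# ISTA: gradient descent on ||Ax - y||^2 with a soft threshold that
# promotes sparsity; rows of A are orthonormal, so step = 1 is safe.
x = np.zeros(n, dtype=complex)
step, lam = 1.0, 0.01
for _ in range(300):
    x = x - step * (A.conj().T @ (A @ x - y))     # gradient step
    mag = np.abs(x)
    x = np.where(mag > step * lam,                # soft threshold
                 (1.0 - step * lam / np.maximum(mag, 1e-12)) * x, 0.0)
print(np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true))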

Relevance:

60.00%

Abstract:

The theme of this dissertation is the finite element method applied to mechanical structures. A new finite element program is developed that, besides executing different types of structural analysis, also computes the derivatives of structural performances using the continuum method of design sensitivity analysis, so that, in combination with the mathematical programming algorithms available in the commercial software MATLAB, structural optimization problems can be solved. The program is called EFFECT (Efficient Finite Element Code). It is developed following the object-oriented programming paradigm, specifically in the C++ programming language. The main objective of this dissertation is to design EFFECT so that it can constitute, at this stage of development, the foundation for a program with analysis capabilities similar to other open-source finite element programs. In this first stage, six elements are implemented for linear analysis: 2-dimensional truss (Truss2D), 3-dimensional truss (Truss3D), 2-dimensional beam (Beam2D), 3-dimensional beam (Beam3D), a triangular shell element (Shell3Node), and a quadrilateral shell element (Shell4Node). The shell elements combine two distinct elements, one simulating the membrane behavior and the other the plate bending behavior. A nonlinear analysis capability is also developed, combining the corotational formulation with the Newton-Raphson iterative method, but at this stage it is only available for problems modeled with Beam2D elements subject to large displacements and rotations, known as geometrically nonlinear problems. The design sensitivity analysis capability is implemented in two elements, Truss2D and Beam2D, including the procedures and analytic expressions for calculating derivatives of displacement, stress, and volume performances with respect to five different types of design variables. Finally, a set of test examples was created to validate the accuracy and consistency of the results obtained from EFFECT, by comparing them with results published in the literature or obtained with the ANSYS commercial finite element code.
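
EFFECT itself is written in C++ and is not reproduced in the abstract; the following is a minimal Python sketch, under obviously simplified assumptions, of the load-stepped Newton-Raphson iteration used in geometrically nonlinear analysis: at each load increment the out-of-balance force is driven to zero by solving with the tangent stiffness. The single-DOF internal force below is a hypothetical stand-in for an assembled corotational Beam2D model.

import numpy as np

def f_int(u):
    # Hypothetical geometrically nonlinear internal force (1 DOF).
    return u**3 - 1.5 * u**2 + 0.8 * u

def k_t(u):
    # Tangent stiffness: derivative of the internal force.
    return 3.0 * u**2 - 3.0 * u + 0.8

f_ext, u = 1.0, 0.0
for lam in np.linspace(0.1, 1.0, 10):   # incremental load steps
    for _ in range(30):                 # Newton-Raphson iterations
        r = lam * f_ext - f_int(u)      # out-of-balance (residual) force
        if abs(r) < 1e-12:
            break
        u += r / k_t(u)                 # solve K_t * du = r
    print(f"load factor {lam:.1f}: u = {u:.6f}")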

Relevance:

60.00%

Abstract:

Research project carried out by a secondary school student and awarded a CIRIT Prize to encourage the scientific spirit of young people in 2009. "La programació al servei de la matemàtica" (Programming at the service of mathematics) is a computer program written in Excel and Visual Basic. It solves first-degree equations, second-degree equations, linear systems of two equations in two unknowns, and determinate compatible linear systems of three equations in three unknowns, and it finds zeros of functions using Bolzano's theorem. In each case, it plots the solutions graphically. To achieve this, the mathematical work involved equations, complex numbers, Cramer's rule for solving systems, and finding a way to program an iterative method based on Bolzano's theorem. On the graphics side, the problems of building tables of values with two and three variables and of working with lines and planes were solved. On the programming side, the student used a language new to him and, above all, had to decide where to place each instruction, since moving it by a single line can change everything. Beyond this, other programming problems were solved and the screen layouts were designed.
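
The project's Visual Basic source is not part of the abstract; a minimal Python sketch of the iterative method it programs for Bolzano's theorem (bisection) looks like this.

def bisect(f, a, b, tol=1e-10, max_iter=200):
    # Bolzano's theorem: a continuous f with f(a) and f(b) of opposite
    # signs has a zero in [a, b]; halve the bracket until it is tiny.
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or (b - a) < tol:
            return m
        if fa * fm < 0:          # zero lies in [a, m]
            b, fb = m, fm
        else:                    # zero lies in [m, b]
            a, fa = m, fm
    return 0.5 * (a + b)

print(bisect(lambda x: x**3 - x - 2, 1.0, 2.0))   # approx. 1.5214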

Relevance:

60.00%

Abstract:

The term landscape and its applications are increasingly used by public administrations and other bodies as a land-management tool. Taking advantage of the large amount of GIS-compatible (Geographic Information Systems) data available for Catalonia, a cartographic synthesis has been developed that identifies the Functional Landscapes (Paisatges Funcionals, PF) of Catalonia, a concept referring to the physical-ecological behaviour of the terrain, derived from suitably transformed and aggregated topographic and climatic variables. A semi-automatic, iterative, unsupervised classification (clustering) method was used, which allows the creation of a hierarchical legend, i.e. levels of generalization. The result is the Map of Functional Landscapes of Catalonia (Mapa de Paisatges Funcionals de Catalunya, MPFC), with a legend of 26 landscape categories and 5 levels of generalization at a spatial resolution of 180 m. In parallel, indirect validations of the resulting map were carried out using naturalists' knowledge and existing cartography, together with an uncertainty map (applying fuzzy logic), which provide information on the reliability of the classification. The Functional Landscapes obtained make it possible to relate zones of homogeneous topo-climatic conditions and to divide the territory into environmentally (rather than politically) characterized zones, with the aim of improving the management of natural resources and the planning of human interventions.
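
The abstract does not name the clustering algorithm; as a hedged illustration of a semi-automatic, iterative, unsupervised classification with increasing levels of generalization, the sketch below runs a minimal k-means over synthetic per-pixel topo-climatic variables at successively finer numbers of classes. All data and class counts are stand-ins.

import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    # Iterative unsupervised classification: assign each pixel to the
    # nearest centroid, recompute centroids, repeat until stable.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels

# Synthetic stand-in for standardized topo-climatic variables per pixel
# (e.g. elevation, slope, temperature, precipitation).
X = np.random.default_rng(1).normal(size=(5000, 4))
for k in (5, 12, 26):                     # coarse-to-fine legend levels
    labels = kmeans(X, k)
    print(k, np.bincount(labels, minlength=k))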

Relevance:

60.00%

Abstract:

The aim of this work was to clarify how the procurement of materials at a frame-element factory is currently organized and controlled. The study sought to find the bottlenecks constraining the materials process and to identify development measures for the problem areas from a process-thinking perspective. The focus was the company's operational materials process, from the ordering of items to warehousing. A qualitative research method was used, and the data for the empirical part were obtained through interviews and from the quality documentation. The company's current state was modelled with process charts, identifying the information and material flows of the process and the most important activities in the materials chain. Based on the process analysis and the interviews, development proposals were defined to improve the performance of the process. The mapping of the current state showed that the biggest problems in the materials process relate to managing order timing, to the impact of changes on the process, and to the lack of defined responsibilities and overall control. The problems stem mainly from the project-driven nature of the construction industry. More effective information management also emerged as a development target, especially the automation of process steps using information systems. Solutions were sought through process thinking, which proved to be a suitable method for developing the operations. The study produced development proposals, on the basis of which a new operating model for materials control was formed. The most important element of the model is the use of advance information to support order planning. Preliminary material quantities are also passed on as advance information to suppliers, who can then plan their own production capacity better. Order planning is refined progressively, and the final material quantity and required date are communicated with the call-off. The model also includes developing the receiving and warehousing of materials and managing changes by making better use of the information system. The most critical aspects of the materials process will be information management and the related questions of responsibility.

Relevance:

60.00%

Abstract:

The main objective of this research was to study the feasibility of incorporating organosolv semi-chemical triticale fibers as the reinforcing element in recycled high-density polyethylene (HDPE). In the first step, triticale fibers were characterized in terms of chemical composition and compared with other biomass species (wheat, rye, softwood, and hardwood). Then, organosolv semi-chemical triticale fibers were prepared by the ethanolamine process. These fibers were characterized in terms of their yield, kappa number, fiber length/diameter ratio, fines content, and viscosity; the results were compared with those of eucalypt kraft pulp. In the second step, the prepared fibers were examined as a reinforcing element for recycled HDPE composites. Coupled and non-coupled HDPE composites were prepared and tested for tensile properties. Results showed that with the addition of the coupling agent maleated polyethylene (MAPE), the tensile properties of the composites improved significantly compared to the non-coupled samples and the plain matrix. Furthermore, the influence of MAPE on the interfacial shear strength (IFSS) was studied. The contributions of both the fibers and the matrix to the composite strength were also studied, by means of a numerical iterative method based on the Bowyer-Bader and Kelly-Tyson equations.
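
The abstract does not spell out the procedure; in the Bowyer-Bader approach the interfacial shear strength is typically adjusted iteratively until the Kelly-Tyson prediction matches the measured composite stress. The hedged Python sketch below does this with a scalar root find over synthetic inputs; all numbers are hypothetical stand-ins, not the paper's measurements.

import numpy as np
from scipy.optimize import brentq

lengths = np.array([0.3, 0.6, 0.9, 1.4, 2.0])       # fiber length classes, mm
v_frac  = np.array([0.04, 0.07, 0.06, 0.02, 0.01])  # volume fraction per class
d, sigma_fu = 0.02, 700.0      # fiber diameter (mm) and strength (MPa)
sigma_m, eta_o = 18.0, 0.375   # matrix stress at failure (MPa), orientation factor
sigma_c_meas = 42.0            # measured composite strength (MPa)

def kelly_tyson(tau):
    # Kelly-Tyson strength prediction for a given IFSS tau (MPa):
    # subcritical fibers carry tau*l/d on average, supercritical ones
    # approach the fiber strength; the matrix carries the rest.
    lc = sigma_fu * d / (2.0 * tau)                  # critical fiber length
    sub = lengths < lc
    x_sub = np.sum(tau * lengths[sub] * v_frac[sub] / d)
    x_sup = np.sum(sigma_fu * v_frac[~sub] * (1.0 - lc / (2.0 * lengths[~sub])))
    return eta_o * (x_sub + x_sup) + (1.0 - v_frac.sum()) * sigma_m

# Iterate on tau until the prediction matches the measured strength.
tau = brentq(lambda t: kelly_tyson(t) - sigma_c_meas, 0.5, 60.0)
print(f"IFSS tau = {tau:.2f} MPa, lc = {sigma_fu * d / (2 * tau):.3f} mm")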

Relevance:

60.00%

Abstract:

Active microwave imaging is explored as an imaging modality for early detection of breast cancer. When exposed to microwaves, breast tumors exhibit electrical properties that are significantly different from those of healthy breast tissues. Two approaches to active microwave imaging are addressed here: the confocal microwave technique, based on measured reflected signals, and microwave tomographic imaging, based on measured scattered signals. Normal and malignant breast tissue samples from the same person were studied within 30 minutes of mastectomy. Corn syrup is used as the coupling medium, as its dielectric parameters match those of the normal breast tissue samples well. As transmitter bandwidth is an important aspect of the time-domain confocal microwave imaging approach, a wideband bowtie antenna with a 2:1 VSWR bandwidth of 46% was designed for the transmission and reception of microwave signals. The same antenna is also used for microwave tomographic imaging, at a frequency of 3000 MHz. Experimentally obtained time-domain results are substantiated by finite-difference time-domain (FDTD) analysis. 2-D tomographic images are reconstructed from the collected scattered data using the distorted Born iterative method. Variations of dielectric permittivity in the breast samples are distinguishable in the obtained permittivity profiles.
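
A full distorted Born implementation needs an electromagnetic forward solver and is far beyond an abstract; the sketch below is a heavily simplified, hedged Python illustration of the shape of the iteration only: linearize the scattering around the current permittivity estimate, solve a regularized linear system for the update, repeat. The forward operator is a random stand-in, and the background Green's-function update of true DBIM is noted in a comment but omitted.

import numpy as np

rng = np.random.default_rng(0)
n_pix, n_meas = 100, 60
eps_true = np.ones(n_pix)
eps_true[40:55] = 1.8                       # high-permittivity "tumor" region
A = rng.normal(size=(n_meas, n_pix)) / np.sqrt(n_meas)  # stand-in operator
y_meas = A @ (eps_true - 1.0)               # Born-like scattered data

eps = np.ones(n_pix)                        # homogeneous starting guess
alpha = 1e-2                                # Tikhonov regularization weight
for it in range(10):
    # 1) forward solve with the current estimate; true DBIM would also
    #    recompute the Green's function of the updated background here
    r = y_meas - A @ (eps - 1.0)
    # 2) regularized linear inversion for the permittivity update
    d_eps = np.linalg.solve(A.T @ A + alpha * np.eye(n_pix), A.T @ r)
    eps += d_eps
    print(it, np.linalg.norm(r))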

Relevance:

60.00%

Abstract:

Despite its recognized value in detecting and characterizing breast disease, X-ray mammography has important limitations that motivate the quest for alternatives to augment the diagnostic tools currently available to the radiologist. The rationale for pursuing electromagnetic methods is the significant dielectric contrast between normal and cancerous breast tissues when exposed to microwaves. The present study analyzes two-dimensional microwave tomographic imaging of normal and malignant breast tissue samples extracted by mastectomy, to assess the suitability of the technique for early detection of breast cancer. The tissue samples are immersed in a matching coupling medium and illuminated by a 3 GHz signal. 2-D tomographic images of the breast tissue samples are reconstructed from the collected scattered data using the distorted Born iterative method. Variations of dielectric permittivity in the breast samples are distinguishable in the obtained permittivity profiles, which is a clear indication of the presence of malignancy. Hence microwave tomographic imaging is proposed as an alternative imaging modality for early detection of breast cancer.

Relevance:

60.00%

Abstract:

The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
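
To make the truncated variant concrete, here is a hedged Python sketch (not the paper's operational system): a Gauss-Newton outer loop whose inner linear least squares problem min ||J dx + r||^2 is solved only approximately, by conjugate-gradient steps on the normal equations that stop early at a relative tolerance. The test problem is a small hypothetical exponential fit.

import numpy as np

def truncated_gauss_newton(r, J, x0, n_outer=20, inner_tol=1e-2):
    x = x0.copy()
    for _ in range(n_outer):
        rk, Jk = r(x), J(x)
        g = Jk.T @ rk                     # gradient of 0.5*||r||^2
        if np.linalg.norm(g) < 1e-12:
            break
        # Inner CG on (J^T J) dx = -g, truncated before full accuracy.
        dx = np.zeros_like(x)
        res = -g.copy()
        p, rs = res.copy(), res @ res
        for _ in range(50):
            Ap = Jk.T @ (Jk @ p)
            a = rs / (p @ Ap)
            dx += a * p
            res -= a * Ap
            rs_new = res @ res
            if np.sqrt(rs_new) < inner_tol * np.linalg.norm(g):
                break                     # the "truncation"
            p = res + (rs_new / rs) * p
            rs = rs_new
        x += dx
    return x

t = np.linspace(0.0, 1.0, 30)
y = np.exp(0.7 * t) + 0.3                  # data from a hypothetical model
r = lambda x: np.exp(x[0] * t) + x[1] - y  # residual of y = exp(a*t) + b
J = lambda x: np.column_stack([t * np.exp(x[0] * t), np.ones_like(t)])
print(truncated_gauss_newton(r, J, np.array([0.0, 0.0])))  # approx [0.7, 0.3]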

Relevance:

60.00%

Abstract:

Kinetic studies on the AR (aldose reductase) protein have shown that it does not behave as a classical enzyme in relation to ring aldose sugars. As with non-enzymatic glycation reactions, there is probably a free radical element involved, derived from monosaccharide autoxidation. In the case of AR, there is free radical oxidation of NADPH by autoxidizing monosaccharides, which is enhanced in the presence of the NADPH-binding protein. Thus any assay for AR based on the oxidation of NADPH in the presence of autoxidizing monosaccharides is invalid, and tissue AR measurements based on this method are also invalid and should be reassessed. AR exhibits broad specificity for both hydrophilic and hydrophobic aldehydes, which suggests that the protein may be involved in detoxification; the last thing we would want to do is inhibit it. ARIs (AR inhibitors) have a number of actions in the cell that are not specific and that do not involve their binding to AR, including peroxy-radical scavenging and metal ion chelation effects. The AR/ARI story emphasizes the importance of correct experimental design in all biocatalytic experiments. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has led to the identification of trends between kinetic model types and sets of design rules, and to the key conclusion that such designs should be based on some prior knowledge of Km and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design, such as the substrate range, the number of measurements, and the choice of intermediate points. The final design collects data suitable for accurate modelling and analysis, minimizes the error in the estimated parameters, and is suitable for simple or complex steady-state models.
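
The Bayesian utility functions themselves are not given in the abstract; as a minimal sketch of the machinery it alludes to (differentiating the model expression and working with the resulting information matrix), the Python example below scores candidate Michaelis-Menten designs by D-optimality. The model, parameter values, and designs are hypothetical.

import numpy as np

def fisher_info(S, Vmax, Km, sigma=1.0):
    # Sensitivities of v = Vmax*S/(Km+S) w.r.t. (Vmax, Km), assuming
    # independent Gaussian measurement error of standard deviation sigma.
    dV = S / (Km + S)
    dK = -Vmax * S / (Km + S) ** 2
    X = np.column_stack([dV, dK]) / sigma
    return X.T @ X

def d_score(S, Vmax=1.0, Km=2.0):
    # D-optimality: det of the information matrix. Larger is better,
    # since inv(F) bounds the covariance of the parameter estimates.
    return np.linalg.det(fisher_info(S, Vmax, Km))

designs = {
    "even spread 0.2-10": np.linspace(0.2, 10.0, 10),
    "clustered below Km": np.linspace(0.2, 2.0, 10),
    "mixed low/high": np.concatenate([np.linspace(0.2, 2.0, 6),
                                      np.linspace(2.5, 10.0, 4)]),
}
for name, S in designs.items():
    print(f"{name:20s} det F = {d_score(S):.4f}")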

Relevance:

60.00%

Abstract:

Purpose: Acquiring details of the kinetic parameters of enzymes is crucial to biochemical understanding, drug development, and clinical diagnosis in ocular diseases. The correct design of an experiment is critical to collecting data suitable for analysis, modelling, and deriving the correct information. As classical design methods are not targeted at the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance. Methods: We have developed Bayesian utility functions to minimise kinetic parameter variance, involving differentiation of model expressions and matrix inversion. These have been applied to the simple kinetics of the enzymes in the glyoxalase pathway (of importance in post-translational modification of proteins in cataract) and the complex kinetics of lens aldehyde dehydrogenase (also of relevance to cataract). Results: Our successful application of Bayesian statistics has allowed us to identify a set of rules for designing optimum kinetic experiments iteratively. Most importantly, the distribution of points in the range is critical; it is not simply a matter of evenly spaced or multiplicatively increasing points. At least 60% must be below the Km (or Km values, if there is more than one dissociation constant) and 40% above. This choice halves the variance found using a simple even spread across the range. With both the glyoxalase system and lens aldehyde dehydrogenase we have significantly improved the variance of kinetic parameter estimation while reducing the number and cost of experiments. Conclusions: We have developed an optimal and iterative method for selecting features of design such as the substrate range, the number of measurements, and the choice of intermediate points. Our novel approach minimises parameter error and costs, and maximises experimental efficiency. It is applicable to many areas of ocular drug design, including receptor-ligand binding and immunoglobulin binding, and should be an important tool in ocular drug discovery.
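
As a hedged numerical check of the 60/40 rule (on a hypothetical Michaelis-Menten problem with Km = 2 and substrate range 0.2-10, not the abstract's enzymes or utility functions), the sketch below compares the Km variance implied by the inverse Fisher information matrix for an even spread and for a design with about 60% of the points below Km. On this toy problem the 60/40 design reduces the Km variance markedly, in the direction the abstract reports.

import numpy as np

def km_variance(S, Vmax=1.0, Km=2.0, sigma=1.0):
    # Variance bound for the Km estimate: the (Km, Km) entry of the
    # inverse Fisher information for v = Vmax*S/(Km+S), Gaussian error.
    dV = S / (Km + S)
    dK = -Vmax * S / (Km + S) ** 2
    X = np.column_stack([dV, dK]) / sigma
    return np.linalg.inv(X.T @ X)[1, 1]

Km = 2.0
even = np.linspace(0.2, 10.0, 10)                  # even spread over range
rule = np.concatenate([np.linspace(0.2, Km, 6),    # ~60% below Km
                       np.linspace(2.5, 10.0, 4)]) # ~40% above Km
print("even spread :", km_variance(even))
print("60/40 design:", km_variance(rule))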