930 results for four-point probe method
Abstract:
We study the preconditioning of symmetric indefinite linear systems of equations that arise in the interior-point solution of linear optimization problems. The preconditioning method we study exploits the block structure of the augmented matrix to design a preconditioner with a similar block structure, improving the spectral properties of the preconditioned matrix and thus the convergence rate of the iterative solution of the system. We also propose a two-phase algorithm that takes advantage of the spectral properties of the transformed matrix to solve for the Newton directions in the interior-point method. Numerical experiments on LP test problems from the NETLIB suite demonstrate the potential of the preconditioning method.
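The abstract does not give the preconditioner's exact form, so the following is only a minimal sketch of the general setting: an interior-point augmented (KKT) system solved iteratively with a block preconditioner. The block-diagonal choice P = diag(H, A H⁻¹ Aᵀ) is a standard stand-in, not the paper's construction; all matrices are synthetic.

```python
# Sketch: MINRES on a symmetric indefinite augmented system K = [[H, A^T],[A, 0]]
# with a block-diagonal preconditioner P = diag(H, A H^{-1} A^T) (illustrative
# choice only; the paper designs its own block-structured preconditioner).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, m = 50, 20                                    # variables, equality constraints
H = sp.diags(rng.uniform(1.0, 10.0, n))          # diagonal SPD (1,1) block
A = sp.random(m, n, density=0.3, random_state=0, format='csr')
K = sp.bmat([[H, A.T], [A, None]], format='csr') # symmetric indefinite
rhs = rng.standard_normal(n + m)

Hinv = sp.diags(1.0 / H.diagonal())
S = (A @ Hinv @ A.T).toarray()                   # small Schur complement
L = np.linalg.cholesky(S)

def apply_prec(r):
    # Apply P^{-1}: exact solves with H and with the Schur complement.
    x = Hinv @ r[:n]
    y = np.linalg.solve(L.T, np.linalg.solve(L, r[n:]))
    return np.concatenate([x, y])

M = spla.LinearOperator((n + m, n + m), matvec=apply_prec)
sol, info = spla.minres(K, rhs, M=M)
print("converged" if info == 0 else f"info = {info}")
```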
Abstract:
In this study, the supercritical antisolvent with enhanced mass transfer (SASEM) method is used to fabricate micro- and nanoparticles of the biocompatible and biodegradable polymer PLGA (poly(DL-lactide-co-glycolic acid)). The process may be extended to the encapsulation of drugs in these micro- and nanoparticles for controlled-release purposes. The conventional supercritical antisolvent (SAS) process involves spraying a solution (organic solvent plus dissolved polymer) into a supercritical fluid (CO₂), which acts as an antisolvent. The high rate of mass transfer between the organic solvent and the supercritical CO₂ results in supersaturation of the polymer in the spray droplet, and the polymer precipitates as micro- or nanoparticles. In the SASEM method, ultrasonic vibration is used to atomize the solution entering the high-pressure vessel containing supercritical CO₂. At the same time, the ultrasonic vibration generates turbulence in the high-pressure vessel, leading to better mass transfer between the organic solvent and the supercritical CO₂. In this study, two organic solvents, acetone and dichloromethane (DCM), were used in the SASEM process. A Phase Doppler Particle Analyzer (PDPA) was used to study the ultrasonic atomization of the liquid by the ultrasonic probe used in the SASEM process. Scanning Electron Microscopy (SEM) was used to study the size and morphology of the polymer particles collected at the end of the process.
Abstract:
As stated in Aitchison (1986), a proper study of relative variation in a compositional data set should be based on logratios, and dealing with logratios excludes dealing with zeros. Nevertheless, it is clear that zero observations might be present in real data sets, either because the corresponding part is completely absent (essential zeros) or because it is below the detection limit (rounded zeros). Because the second kind of zero is usually understood as "a trace too small to measure", it seems reasonable to replace it by a suitable small value, and this has been the traditional approach. As stated, e.g., by Tauber (1999) and by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000), the principal problem in compositional data analysis is related to rounded zeros. One should be careful to use a replacement strategy that does not seriously distort the general structure of the data. In particular, the covariance structure of the involved parts (and thus the metric properties) should be preserved, as otherwise further analysis on subpopulations could be misleading. Following this point of view, a non-parametric imputation method is introduced in Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2000). This method is analyzed in depth by Martín-Fernández, Barceló-Vidal, and Pawlowsky-Glahn (2003), where it is shown that the theoretical drawbacks of the additive zero replacement method proposed in Aitchison (1986) can be overcome using a new multiplicative approach on the non-zero parts of a composition. The new approach has reasonable properties from a compositional point of view. In particular, it is "natural" in the sense that it recovers the "true" composition if the replacement values are identical to the missing values, and it is coherent with the basic operations on the simplex. This coherence implies that the covariance structure of subcompositions with no zeros is preserved. As a generalization of the multiplicative replacement, the same paper introduces a substitution method for missing values in compositional data sets.
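A minimal sketch of the multiplicative replacement as it is usually stated (Martín-Fernández et al., 2003): each rounded zero is replaced by a small value δ and the non-zero parts are rescaled multiplicatively so the total κ is preserved. The numbers below are illustrative only.

```python
# Multiplicative replacement of rounded zeros: r_j = delta_j where x_j == 0,
# and r_j = x_j * (1 - sum(delta over zero parts) / kappa) elsewhere, so that
# the replaced composition still sums to kappa.
import numpy as np

def multiplicative_replacement(x, delta):
    """x: composition containing rounded zeros; delta: replacement value(s)."""
    x = np.asarray(x, dtype=float)
    delta = np.broadcast_to(np.asarray(delta, dtype=float), x.shape)
    kappa = x.sum()
    zeros = x == 0
    return np.where(zeros, delta, x * (1.0 - delta[zeros].sum() / kappa))

comp = np.array([0.60, 0.25, 0.0, 0.15])        # closed to kappa = 1
r = multiplicative_replacement(comp, 0.005)
print(r, r.sum())                                # still sums to 1.0
```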
Abstract:
We study four measures of problem instance behavior that might account for the observed differences in interior-point method (IPM) iterations when these methods are used to solve semidefinite programming (SDP) problem instances: (i) an aggregate geometry measure related to the primal and dual feasible regions (aspect ratios) and norms of the optimal solutions, (ii) the (Renegar-) condition measure C(d) of the data instance, (iii) a measure of the near-absence of strict complementarity of the optimal solution, and (iv) the level of degeneracy of the optimal solution. We compute these measures for the SDPLIB suite problem instances and measure the correlation between these measures and IPM iteration counts (solved using the software SDPT3) when the measures have finite values. Our conclusions are roughly as follows: the aggregate geometry measure is highly correlated with IPM iterations (CORR = 0.896), and is a very good predictor of IPM iterations, particularly for problem instances with solutions of small norm and aspect ratio. The condition measure C(d) is also correlated with IPM iterations, but less so than the aggregate geometry measure (CORR = 0.630). The near-absence of strict complementarity is weakly correlated with IPM iterations (CORR = 0.423). The level of degeneracy of the optimal solution is essentially uncorrelated with IPM iterations.
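As a toy illustration of the correlation computation described above (data values invented, not from SDPLIB or SDPT3 runs), one might filter out instances with non-finite measures before computing the Pearson correlation, as the study does:

```python
# Pearson correlation between a behavioral measure and IPM iteration counts,
# restricted to instances where the measure has a finite value.
import numpy as np

geometry_measure = np.array([1.2, 3.5, np.inf, 7.1, 2.2, 9.4])  # illustrative
ipm_iterations   = np.array([14,  21,  35,     29,  17,  33 ])  # illustrative

finite = np.isfinite(geometry_measure)
corr = np.corrcoef(geometry_measure[finite], ipm_iterations[finite])[0, 1]
print(f"CORR = {corr:.3f}")
```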
Abstract:
The objective of traffic engineering is to optimize network resource utilization. Although several works have been published on minimizing network resource utilization, few have focused on LSR (label switched router) label space. This paper proposes an algorithm that takes advantage of the MPLS label stack features in order to reduce the number of labels used in LSPs. Some tunnelling methods and their MPLS implementation drawbacks are also discussed. The described algorithm sets up NHLFE (next hop label forwarding entry) tables in each LSR, creating asymmetric tunnels when possible. Experimental results show that the described algorithm achieves a large reduction factor in the label space. The presented work applies to both types of connections: P2MP (point-to-multipoint) and P2P (point-to-point).
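The paper's tunnel-construction algorithm cannot be reconstructed from the abstract alone; the sketch below only illustrates the NHLFE data structure and the standard MPLS label-stack operations (swap/push/pop) that label stacking and tunnels rely on. Labels and LSR names are hypothetical.

```python
# Toy NHLFE table: incoming label -> (next hop, list of label-stack operations).
# Stacking a tunnel label (push) lets many inner LSPs share one outer label,
# which is the kind of saving a label-space reduction algorithm exploits.
nhlfe = {
    16: ("LSR-B", [("swap", 17)]),                 # ordinary LSP hop
    18: ("LSR-B", [("swap", 19), ("push", 30)]),   # tunnel head: stack a label
    30: ("LSR-C", [("pop",)]),                     # tunnel tail: expose inner label
}

def forward(label, stack):
    """Apply this LSR's NHLFE entry to a packet's label stack (top = stack[-1])."""
    next_hop, ops = nhlfe[label]
    for op in ops:
        if op[0] == "swap":
            stack[-1] = op[1]
        elif op[0] == "push":
            stack.append(op[1])
        elif op[0] == "pop":
            stack.pop()
    return next_hop, stack

print(forward(18, [18]))   # ('LSR-B', [19, 30])
```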
Abstract:
This participatory action research was based on an educational intervention experience in La Cruz and Bello Oriente (Manrique-Medellín), a marginal zone in the northeastern part of Commune 3 in Medellín, Colombia. In this marginal sector, psychosocial problems seem to be associated with limited educational and employment opportunities, domestic violence, illegal armed forces, sexual abuse, social discrimination, and lack of adequate public services, among others. All of these are also considered risk factors for drug dependency. We used a structured interview designed to identify leisure tendencies, use of free time, and tendencies in recreational activities. Data from the interview were triangulated with information collected through observation and field work, and used to build a psycho-pedagogic method based on play and leisure activities. The effects of this educational intervention on the satisfaction of human needs were analyzed in light of the theory of Manfred Max-Neef. Results point out the need for new educational strategies aimed at promoting creativity, solidarity, mental, physical and social health, more enthusiasm and motivation and, in general, positive attitudes that help prevent drug dependence.
Abstract:
Purpose: there are many studies reporting the benefits of pulmonary rehabilitation, but few describe how these services operate and what activities they carry out. This article presents the characteristics of the services, their management components and the training level of team members, in addition to the variables or instruments used to measure the effectiveness and impact of these programs. Method: a cross-sectional study of a convenience sample that included seven pulmonary rehabilitation services in four Colombian cities (Bogotá, Medellín, Manizales and Cali), selected for their coverage, for having at least one year of experience, and for being formally established and recognized nationwide. The interdisciplinary team of each service answered a survey that was validated through a pilot test and expert consensus. Participation was voluntary. Results: the pulmonary rehabilitation services have been operating for an average of a decade, with COPD and asthma as the main pathologies treated. The programs are characterized by outpatient treatment with an average duration of eight to twelve weeks, in one-hour sessions three times a week. The director of the service is usually a pulmonologist and the coordinator a physiotherapist (57.14%). The postgraduate training of these professionals is notable, and they report procedural, administrative and communicative skills, but rate their research skills as only fair. The physical and technological resources are rated well. 71.42% have conducted impact studies, but only 28.57% have published them. All programs include training of the upper limbs, lower limbs and respiratory muscles, counseling, functional assessment and quality-of-life evaluation. The effectiveness and impact of the programs are measured by the walking test, quality-of-life questionnaires and activities of daily living.
Abstract:
Human butyrylcholinesterase (BChE; EC 3.1.1.8) is a polymorphic enzyme synthesized in the liver and adipose tissue, widely distributed throughout the body, and responsible for hydrolyzing certain choline esters such as procaine, aliphatic esters such as acetylsalicylic acid, drugs such as methylprednisolone, mivacurium and succinylcholine, and drugs of use and/or abuse such as heroin and cocaine. It is encoded by the BCHE gene (OMIM 177400), of which more than 100 variants have been identified, some not fully studied, in addition to the most frequent form, called usual or wild type. Different polymorphisms of the BCHE gene have been associated with the synthesis of enzymes with varying levels of catalytic activity. The molecular bases of some of these genetic variants have been reported, among them the Atypical (A), fluoride-resistant types 1 and 2 (F-1 and F-2), silent (S), Kalow (K), James (J) and Hammersmith (H) variants. In this study, the validated instrument Lifetime Severity Index for Cocaine Use Disorder (LSI-C) was applied to a group of patients to assess the severity of cocaine use over their lifetime. In addition, Single Nucleotide Polymorphisms (SNPs) in the BCHE gene known to be responsible for adverse reactions in cocaine-using patients were determined by gene sequencing, and the effect of the SNPs on protein function and structure was predicted using bioinformatic tools. The LSI-C instrument yielded results in four dimensions: lifetime use, recent use, psychological dependence, and attempts to quit. Molecular analysis revealed two non-synonymous coding SNPs (cSNPs) in 27.3% of the sample, c.293A>G (p.Asp98Gly) and c.1699G>A (p.Ala567Thr), located in exons 2 and 4, which correspond, functionally, to the Atypical (A) [dbSNP: rs1799807] and Kalow (K) [dbSNP: rs1803274] variants of the BChE enzyme, respectively. In silico prediction studies established a pathogenic character for the p.Asp98Gly SNP, whereas the p.Ala567Thr SNP showed neutral behavior. Analysis of the results suggests a relationship between polymorphisms or genetic variants responsible for low catalytic activity and/or low plasma concentration of the BChE enzyme and some of the adverse reactions observed in cocaine-using patients.
Abstract:
Introduction: In neurosurgical practice, the use of thoracic pedicle screws has been increasing in the treatment of various spinal pathologies. Since the original description, proper cannulation of the trajectory has been confirmed with a pedicle probe; however, the validity and safety of this instrument are limited, and there is a risk of complex complications. This study assesses the safety and validity of using the probe to diagnose the integrity of the thoracic pedicle trajectory. Methods: Thoracic pedicles were cannulated in cadaveric specimens and randomly classified as normal (intact) or abnormal (breached). Four spine surgeons with different levels of expertise then evaluated each pedicle trajectory. Agreement studies were performed, obtaining the Kappa coefficient, overall accuracy, sensitivity, specificity, PPV and NPV, and the area under the ROC curve to determine the accuracy of the test. Results: The accuracy and validity in diagnosing the pedicle trajectory and locating the site of breach are directly related to the surgeon's experience and training; the most experienced evaluator obtained the best results. The probe has good accuracy (area under the ROC curve 0.86) for diagnosing pedicle breaches. Discussion: Accurate evaluation of the pedicle trajectory, i.e. the presence or absence of a breach, depends on the surgeon's level of experience; moreover, the diagnostic accuracy for a breach varies with its location.
Abstract:
In this paper we address the problem of extracting representative point samples from polygonal models. The goal of such a sampling algorithm is to find points that are evenly distributed. We propose star discrepancy as a measure of sampling quality and introduce new sampling methods based on global line distributions. We investigate several line generation algorithms, including an efficient hardware-based sampling method. Our method contributes to the area of point-based graphics by extracting points that are more evenly distributed than those produced by current sampling algorithms.
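For context, a minimal sketch of the baseline such methods are measured against: uniform area-weighted random sampling of a triangle mesh. This is our own illustrative code, not the paper's line-based sampler.

```python
# Uniform random sampling of points on a triangle mesh: pick triangles with
# probability proportional to area, then sample uniform barycentric coordinates.
import numpy as np

def sample_mesh(vertices, triangles, n, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    v0, v1, v2 = (vertices[triangles[:, i]] for i in range(3))
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    idx = rng.choice(len(triangles), size=n, p=areas / areas.sum())
    r1, r2 = rng.random(n), rng.random(n)
    s = np.sqrt(r1)                      # square-root trick for uniformity
    return ((1 - s)[:, None] * v0[idx]
            + (s * (1 - r2))[:, None] * v1[idx]
            + (s * r2)[:, None] * v2[idx])

verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]])
tris = np.array([[0, 1, 2], [1, 3, 2]])
print(sample_mesh(verts, tris, 5))
```

Such independent random samples tend to clump; the paper's point is precisely that line-based, discrepancy-aware sampling distributes points more evenly than this baseline.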
Abstract:
Reading is the basis for grasping and understanding every subject at the start of school life. Its mastery and successful learning define a person's success throughout his or her professional, affective and social life. Conversely, failure to master reading, namely in decoding and understanding any kind of written code, constrains a person's entire existence. There is much research and work in this area, with no consensus among researchers on the most effective method for teaching reading: the phonic or synthetic, the global or analytic, or the mixed method. However, before choosing a method, teachers should know their group and its characteristics in order to develop their own method of teaching reading; that is, they should take the most important features of each method and apply them to their class, also taking into account the students' natural predisposition to learn. Given the importance of reading as a competence, this study aimed to understand the variables that interfere with the effective teaching of reading in the initial phase of that learning, by analyzing the methodology and strategies adopted by teachers, as well as the pedagogical relationship, the organization/management of the classroom, and the family contexts. The empirical study took place over six weeks in two primary (1st cycle) schools, in two first-year classes, with a sample of four students per class, two female and two male. Teachers were asked to choose, among their students, two good readers and two less good readers. Data were collected from the Student Identification Form and the Graffar Scale, completed by the students' guardians, from records of interviews with the teachers and some students, from classroom observations, and from the assessment of the reading of a text. These data were recorded in tables and graphs, which allowed the analysis of the results obtained by each group, particularly regarding reading accuracy and speed. From this analysis we found no major differences in the family contexts (parents' schooling, profession, encouragement of reading) or in the classroom context (method, pedagogical relationship, organization of learning in space and time). The small differences point to the figure of the teacher and to the dynamics and classroom climate he or she creates, to which the adopted method of initiating reading may not be unrelated; these are hypotheses for future research.
Abstract:
We develop a new multiwave version of the range test for shape reconstruction in inverse scattering theory. The range test [R. Potthast, et al., A 'range test' for determining scatterers with unknown physical properties, Inverse Problems 19(3) (2003) 533–547] was originally proposed to obtain knowledge about an unknown scatterer when the far field pattern is given for only one plane wave. Here, we extend the method to the case of multiple waves and show that the full shape of the unknown scatterer can be reconstructed. We further clarify the relation between the range test methods, the potential method [A. Kirsch, R. Kress, On an integral equation of the first kind in inverse acoustic scattering, in: Inverse Problems (Oberwolfach, 1986), Internationale Schriftenreihe zur Numerischen Mathematik, vol. 77, Birkhäuser, Basel, 1986, pp. 93–102] and the singular sources method [R. Potthast, Point sources and multipoles in inverse scattering theory, Habilitation Thesis, Göttingen, 1999]. In particular, we propose a new version of the Kirsch–Kress method using the range test, and a new approach to the singular sources method based on the range test and the potential method. Numerical examples of reconstructions for all four methods are provided.
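A rough sketch of the one-wave range test criterion, in our notation and with normalizing constants omitted (details as in the cited Potthast et al. paper): one tests whether the far field pattern lies in the range of the single-layer far-field operator of a test domain G.

```latex
% One-wave range test (sketch; constants omitted, notation ours). S_\infty maps
% densities on the test-domain boundary \partial G to far field patterns:
\[
  (S_\infty g)(\hat{x}) = \int_{\partial G} e^{-ik\,\hat{x}\cdot y}\, g(y)\, ds(y),
  \qquad \hat{x} \in \mathbb{S}^1 .
\]
% Tikhonov-regularized solution of S_\infty g = u^\infty:
\[
  g_\alpha := \bigl(\alpha I + S_\infty^{*} S_\infty\bigr)^{-1} S_\infty^{*}\, u^{\infty}.
\]
% If \|g_\alpha\| stays bounded as \alpha \to 0, the scatterer is judged to lie
% inside G; the multiwave version combines this test over many incident waves
% to reconstruct the full shape.
```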
Abstract:
It has been generally accepted that the method of moments (MoM) variogram, which has been widely applied in soil science, requires about 100 sites at an appropriate interval apart to describe the variation adequately. This sample size is often larger than can be afforded for soil surveys of agricultural fields or contaminated sites. Furthermore, it might be a much larger sample size than is needed where the scale of variation is large. A possible alternative in such situations is the residual maximum likelihood (REML) variogram, because fewer data appear to be required. The REML method is parametric and is considered reliable where there is trend in the data, because it is based on generalized increments that filter trend out, and only the covariance parameters are estimated. Previous research has suggested that fewer data are needed to compute a reliable variogram using a maximum likelihood approach such as REML; however, the results can vary according to the nature of the spatial variation. Several issues remain to examine: how many fewer data can be used, how the sampling sites should be distributed over the site of interest, and how different degrees of spatial variation affect the data requirements. The soil of four field sites of different size, physiography, parent material and soil type was sampled intensively, and MoM and REML variograms were calculated for clay content. The data were then sub-sampled to give different sample sizes and distributions of sites, and the variograms were computed again. The model parameters for the sets of variograms for each site were used for cross-validation. Predictions based on REML variograms were generally more accurate than those from MoM variograms with fewer than 100 sampling sites. A sample size of around 50 sites at an appropriate distance apart, possibly determined from variograms of ancillary data, appears adequate to compute REML variograms for kriging soil properties for precision agriculture and contaminated sites.
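For reference, the standard Matheron estimator behind the MoM variograms discussed here (a textbook formula, not specific to this paper):

```latex
% Method-of-moments (Matheron) variogram estimator: z(x_i) are the observations
% (here clay content) and N(h) is the number of site pairs separated by lag h.
\[
  \hat{\gamma}(h) \;=\; \frac{1}{2\,N(h)} \sum_{i=1}^{N(h)}
  \bigl\{ z(x_i) - z(x_i + h) \bigr\}^{2}
\]
```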
Abstract:
These notes were issued on a small scale in 1983 and 1987, and on request at other times. This issue follows two items of news. First, Walter Colquitt and Luther Welsh found the 'missed' Mersenne prime M110503 and advanced the frontier of complete Mp-testing to 139,267. In so doing, they terminated Slowinski's significant string of four consecutive Mersenne primes. Secondly, a team of five established a non-Mersenne number as the largest known prime. This result terminated the 1952-89 reign of Mersenne primes. All the original Mersenne numbers with p < 258 were factorised some time ago. The Sandia Laboratories team of Davis, Holdridge & Simmons, with some little assistance from a CRAY machine, cracked M211 in 1983 and M251 in 1984. They contributed their results to the 'Cunningham Project', care of Sam Wagstaff. That project is now moving apace thanks to developments in technology, factorisation and primality testing. New levels of computer power and new computer architectures, motivated by the open-ended promise of parallelism, are now available. Once again, the suppliers may be offering free buildings with the computer. However, the Sandia '84 CRAY-1 implementation of the quadratic-sieve method is now outpowered by the number-field sieve technique. This is deployed on either purpose-built hardware or large syndicates, even distributed world-wide, of collaborating standard processors. New factorisation techniques of both special and general applicability have been defined and deployed. The elliptic-curve method finds large factors with helpful properties, while the number-field sieve approach is breaking down composites with over one hundred digits. The material is updated on an occasional basis to follow the latest developments in primality-testing large Mp and factorising smaller Mp; all dates derive from the published literature or referenced private communications. Minor corrections, additions and changes merely advance the issue number after the decimal point. The reader is invited to report any errors and omissions that have escaped the proof-reading, to answer the unresolved questions noted, and to suggest additional material associated with this subject.
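The complete Mp-testing referred to above is done with the Lucas-Lehmer test; a minimal sketch follows (fine for small exponents, whereas serious Mp hunting relies on FFT-based multiplication):

```python
# Lucas-Lehmer test: M_p = 2**p - 1 is prime iff s_{p-2} == 0, where
# s_0 = 4 and s_{k+1} = s_k**2 - 2 (mod M_p). Valid for odd prime p.
def lucas_lehmer(p):
    """Return True if the Mersenne number 2**p - 1 is prime (p an odd prime)."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Examples: M127 is prime; M11 = 23 * 89 is composite.
print(lucas_lehmer(127), lucas_lehmer(11))
```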
Abstract:
We consider the application of the conjugate gradient method to the solution of large, symmetric indefinite linear systems. Special emphasis is put on the use of constraint preconditioners and a new factorization that can reduce the number of flops required by the preconditioning step. Results concerning the eigenvalues of the preconditioned matrix and its minimum polynomial are given. Numerical experiments validate these conclusions.
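A minimal sketch of the setting, assuming a generic constraint preconditioner P = [[G, Bᵀ],[B, 0]] with G = diag(A), factorized here with plain sparse LU; the paper's flop-saving factorization and eigenvalue results are not reproduced. With a consistent right-hand side, preconditioned CG is applicable despite the indefiniteness (cf. constraint-preconditioning results such as Keller, Gould & Wathen, 2000).

```python
# Sketch: CG on a saddle-point system [[A, B^T],[B, 0]] with a constraint
# preconditioner P = [[G, B^T],[B, 0]], G = diag(A), factorized once by
# sparse LU (illustrative; the paper proposes a cheaper factorization).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
n, m = 40, 10
main = 2.0 + rng.uniform(0.0, 1.0, n)
A = sp.diags([-0.5 * np.ones(n - 1), main, -0.5 * np.ones(n - 1)],
             [-1, 0, 1], format='csr')           # SPD, diagonally dominant
B = sp.random(m, n, density=0.4, random_state=1, format='csr')
K = sp.bmat([[A, B.T], [B, None]], format='csc')
b = np.concatenate([rng.standard_normal(n), np.zeros(m)])   # consistent rhs

G = sp.diags(A.diagonal())
P = sp.bmat([[G, B.T], [B, None]], format='csc')
M = spla.LinearOperator(K.shape, matvec=spla.splu(P).solve)

# CG is formally for SPD systems, but with a constraint preconditioner and a
# zero constraint block in the rhs the iterates stay on the constraint
# manifold, where the preconditioned operator behaves like an SPD one.
x, info = spla.cg(K, b, M=M)
print("info =", info, " residual =", np.linalg.norm(K @ x - b))
```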