Abstract:
As early as the state-of-the-art report on deep excavation and tunnelling in hard ground presented at the 7th Conference on Soil Mechanics and Foundation Engineering, Peck (1969) identified the three issues to be taken into account in the design of tunnels in soft ground: (i) stability of the cavity during construction, with particular attention to the stability of the tunnel face; (ii) evaluation of the ground movements induced by tunnelling and of the effect of shallow underground works on surface settlements; and (iii) design of the tunnel support system to be installed to ensure the short- and long-term stability of the structure. This thesis focuses on the second of these issues, analysing various solutions commonly specified to reduce the movements induced by tunnel excavation. Its aim is to analyse the influence of different designs of micropile forepole umbrellas, micropile walls, jet-grouting umbrellas and jet-grouting walls on surface settlements during the construction of shallow tunnels, in order to identify the design that makes the most efficient use of the means employed for a given reduction in settlement. To this end, a set of guidelines is established so that designers can know a priori which of the treatments proposed in the thesis are the most efficient at reducing surface settlements when a tunnel is being designed, providing qualitative and some quantitative data on the most suitable designs. The analyses use state-of-the-art finite element software that simulates the stress-strain behaviour of the ground with the Hardening Soil Small model, an elasto-plastic variant of the hyperbolic model similar to the Hardening Soil model. In addition, this model incorporates a relationship between strain and stiffness modulus, reproducing the different behaviour of the soil at small strains (for example vibrations, with strains below 10⁻⁵) and at large strains (strains above 10⁻³). Five tunnel sections have been chosen for the thesis: two typical sections of tunnels excavated with a tunnel boring machine (TBM) and three sections excavated by conventional methods (two using the Belgian method and one using the NATM). To achieve the stated objectives, the stress-relaxation value used in two-dimensional models was first analysed through a correlation between three-dimensional and two-dimensional models, examining how it varies with parameters such as the tunnel cross-section, the cover depth, the construction procedure, the advance length (conventional methods) or the face pressure (TBM), and the geotechnical characteristics of the ground in which the tunnel is excavated. Subsequently, the protection-wall design that is most effective at reducing settlements was analysed, varying parameters such as the toe embedment, the type of micropile or pile, the effect of bracing the protection walls at their heads, the inclination of the wall, the distance from the wall to the tunnel axis, and a double-row arrangement of the designed wall. To complete the study of the effectiveness of protection walls in reducing settlements, the influence that nearby surcharges (simulating buildings) have on the effectiveness of the designed wall (from the point of view of reducing surface movements) is also studied. In order to compare the effectiveness of a micropile wall with that of a micropile forepole umbrella, different umbrella designs have been analysed, comparing the movements obtained with those obtained for the micropile wall and contrasting both with measurements from completed projects. In a further section a similar comparison between treatments is carried out, in this case with a jet-grouting umbrella and jet-grouting walls. The results obtained have been compared with settlement values measured in several completed projects whose sections correspond to those used in the numerical models.
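As background for the settlement comparisons described above, the transverse surface settlement trough above a shallow tunnel is commonly idealized with the Gaussian curve associated with Peck (1969). The formula below is that classical baseline, given here for orientation only, with notation assumed rather than taken from the thesis:

\[ S(x) = S_{\max}\, \exp\!\left(-\frac{x^{2}}{2 i^{2}}\right), \qquad V_{s} = \sqrt{2\pi}\; i\, S_{\max}, \]

where \(x\) is the horizontal distance from the tunnel axis, \(S_{\max}\) the settlement above the axis, \(i\) the distance from the axis to the inflection point of the trough, and \(V_{s}\) the settlement volume per unit length of tunnel (often expressed as a volume loss). Protective walls and umbrellas of the kind studied in the thesis aim to reduce \(S_{\max}\) and \(V_{s}\) for a given excavation.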
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and the less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% from the simulation-based reference values. A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals of the system, introduces the noise sources for each group independently, and then combines the results. In this way, the number of noise sources present at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that reduce the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization. Second, the incremental method builds on the fact that, although a given confidence level must be guaranteed for the final results of the search, more relaxed levels, and therefore considerably fewer simulation samples, can be used in the initial stages of the process, when the search is still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small and medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular quantization framework that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
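To make the kind of search performed by the optimization stage concrete, the sketch below shows a toy greedy word-length loop driven by a Monte-Carlo error estimate. It is a minimal, assumption-laden illustration (toy datapath y = a*b + c, placeholder signal names, a fixed RMS noise budget), not HOPLITE's API and not the interpolative or incremental methods themselves.

```python
import random

def quantize(value, frac_bits):
    """Round a real value to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 2 ** frac_bits
    return round(value * scale) / scale

def mc_error(frac_bits, n_samples=2000):
    """Monte-Carlo RMS output error of a toy datapath y = a*b + c under quantization."""
    acc = 0.0
    for _ in range(n_samples):
        a, b, c = (random.uniform(-1.0, 1.0) for _ in range(3))
        exact = a * b + c
        approx = quantize(quantize(a, frac_bits["a"]) * quantize(b, frac_bits["b"]),
                          frac_bits["mul"]) + quantize(c, frac_bits["c"])
        acc += (exact - approx) ** 2
    return (acc / n_samples) ** 0.5

def greedy_wordlength_search(signals, start_bits=16, budget=1e-3):
    """Greedily shave fractional bits while the estimated error stays within budget."""
    frac_bits = {s: start_bits for s in signals}
    while True:
        candidates = []
        for s in signals:
            if frac_bits[s] <= 1:
                continue                      # keep at least one fractional bit
            trial = dict(frac_bits)
            trial[s] -= 1
            err = mc_error(trial)
            if err <= budget:                 # feasible move: record its error
                candidates.append((err, s))
        if not candidates:
            return frac_bits                  # no bit can be removed within the budget
        _, best_signal = min(candidates)      # take the move that hurts accuracy least
        frac_bits[best_signal] -= 1

if __name__ == "__main__":
    print(greedy_wordlength_search(["a", "b", "c", "mul"]))
```

In the spirit of the incremental method described above, n_samples could start small and be tightened only as the search approaches the noise budget, trading statistical confidence for speed in the early iterations.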
Abstract:
A methodology, fluorescence-intensity distribution analysis, has been developed for confocal microscopy studies in which the fluorescence intensity of a sample with a heterogeneous brightness profile is monitored. An adjustable formula, modeling the spatial brightness distribution, and the technique of generating functions for calculation of theoretical photon count number distributions serve as the two cornerstones of the methodology. The method permits the simultaneous determination of concentrations and specific brightness values of a number of individual fluorescent species in solution. Accordingly, we present an extremely sensitive tool to monitor the interaction of fluorescently labeled molecules or other microparticles with their respective biological counterparts that should find a wide application in life sciences, medicine, and drug discovery. Its potential is demonstrated by studying the hybridization of 5′-(6-carboxytetramethylrhodamine)-labeled and nonlabeled complementary oligonucleotides and the subsequent cleavage of the DNA hybrids by restriction enzymes.
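For orientation, the generating-function relation at the heart of this kind of analysis is commonly written as follows; the notation is assumed here and the expression is a standard model statement rather than an equation quoted from the abstract:

\[ G(\xi) \;=\; \sum_{n=0}^{\infty} P(n)\,\xi^{n} \;=\; \exp\!\left( \sum_{i} c_{i} \int_{V} \left[ e^{(\xi-1)\, q_{i}\, T\, B(\mathbf{r})} - 1 \right] dV \right), \]

where \(P(n)\) is the photon count number distribution in a counting interval \(T\), \(c_{i}\) and \(q_{i}\) are the concentration and specific brightness of species \(i\), and \(B(\mathbf{r})\) is the adjustable spatial brightness profile. The theoretical \(P(n)\) is recovered from \(G\) and the pairs \((c_{i}, q_{i})\) are fitted to the measured count distribution.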
Abstract:
The NMR structure of the rat calreticulin P-domain, comprising residues 189–288, CRT(189–288), shows a hairpin fold that involves the entire polypeptide chain, has the two chain ends in close spatial proximity, and does not fold back on itself. This globally extended structure is stabilized by three antiparallel β-sheets, with the β-strands comprising the residues 189–192 and 276–279, 206–209 and 262–265, and 223–226 and 248–251, respectively. The hairpin loop of residues 227–247 and the two connecting regions between the β-sheets contain a hydrophobic cluster, where each of the three clusters includes two highly conserved tryptophyl residues, one from each strand of the hairpin. The three β-sheets and the three hydrophobic clusters form a repeating pattern of interactions across the hairpin that reflects the periodicity of the amino acid sequence, which consists of three 17-residue repeats followed by three 14-residue repeats. Within the global hairpin fold there are two well-ordered subdomains comprising the residues 219–258, and 189–209 and 262–284, respectively. These are separated by a poorly ordered linker region, so that the relative orientation of the two subdomains cannot be precisely described. The structure type observed for CRT(189–288) provides an additional basis for functional studies of the abundant endoplasmic reticulum chaperone calreticulin.
Abstract:
Mathematical and experimental simulations predict that external fertilization is unsuccessful in habitats characterized by high water motion. A key assumption of such predictions is that gametes are released in hydrodynamic regimes that quickly dilute gametes. We used fucoid seaweeds to examine whether marine organisms in intertidal and subtidal habitats might achieve high levels of fertilization by restricting their release of gametes to calm intervals. Fucus vesiculosus L. (Baltic Sea) released high numbers of gametes only when maximal water velocities were below ca. 0.2 m/s immediately prior to natural periods of release, which occur in early evening in association with lunar cues. Natural fertilization success measured at two sites was always close to 100%. Laboratory experiments confirmed that (i) high water motion inhibits gamete release by F. vesiculosus and by the intertidal fucoids Fucus distichus L. (Maine) and Pelvetia fastigiata (J. Ag.) DeToni (California), and (ii) that photosynthesis is required for high gamete release. These data suggest that chemical changes in the boundary layer surrounding adults during photosynthesis and/or mechanosensitive channels may modulate gamete release in response to changing hydrodynamic conditions. Therefore, sensitivity to environmental factors can lead to successful external fertilization, even for species living in turbulent habitats.
Abstract:
Molecular modeling has been used to predict that 2,6-disubstituted amidoanthraquinones, and not the 1,4 series, should preferentially interact with and stabilize triple-stranded DNA structures over duplex DNA. This is due to marked differences in the nature of chromophore-base stacking and groove accessibility for the two series. A DNA foot-printing method that monitors the extent of protection from DNase I cleavage on triplex formation has been used to examine the effects of a number of synthetic isomer compounds in the 1,4 and 2,6 series. The experimental results are in accord with the predicted behavior and confirm that the 1,4 series bind preferentially to double- rather than triple-stranded DNA, whereas the isomeric 2,6 derivatives markedly favor binding to triplex DNA.
Abstract:
Irregular strip packing is a class of cutting and packing problems with applications in the textile, furniture and shipbuilding industries. The problem consists of defining an arrangement of irregular items such that the length of the rectangular container holding the layout is minimized. The solution must be feasible, that is, items must not overlap and must not extend beyond the container walls. For practical reasons, up to four orientations are allowed for each item. The volume of wasted material is directly related to the quality of the layout obtained; for this reason, an efficient solution implies an economic advantage and results in a smaller environmental impact. The objective of this work is the automatic generation of layouts with compaction levels and processing times comparable to other solutions in the literature. To reach this objective, two solution approaches are proposed. The first consists of placing the items sequentially so as to maximize the occurrence of fitting positions, which are related to the restriction of an item's movement within the layout. In general terms, several placement sequences are explored in order to find the most compact solution. In the second approach, which is the main proposal of this work, raster methods are used to move items over a placement grid, allowing overlap; it is based on an overlap-minimization strategy whose goal is to eliminate overlap inside a closed container, as sketched below. Both algorithms were tested on the same set of benchmark problems from the literature. The first strategy was not able to obtain satisfactory solutions, although it provided important insights into the properties of fitting positions. The second approach, on the other hand, obtained competitive results, and its performance was comparable to other solutions even in cases with large data volumes. Moreover, as future work, the algorithm can be extended to accept items of generic geometry, which may become the main differential of the proposal.
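To make the raster-based overlap-minimization idea concrete, the sketch below rasterizes items as boolean masks on a placement grid and repeatedly relocates each item to the position of least overlap inside a fixed-size container. The container size, item shapes and sweep limit are illustrative assumptions; rotations and the outer search over the strip length are omitted, so this is not the thesis algorithm itself.

```python
import numpy as np

def rasterize(items, positions, H, W, skip=None):
    """Paint every item mask except `skip` onto an H x W occupancy grid."""
    grid = np.zeros((H, W), dtype=int)
    for j, (mask, (ox, oy)) in enumerate(zip(items, positions)):
        if j == skip:
            continue
        h, w = mask.shape
        grid[oy:oy + h, ox:ox + w] += mask
    return grid

def overlap(grid, mask, ox, oy):
    """Overlap (in raster cells) between `mask` placed at (ox, oy) and `grid`."""
    h, w = mask.shape
    return int(np.sum(grid[oy:oy + h, ox:ox + w][mask > 0]))

def minimize_overlap(items, positions, H, W, sweeps=20):
    """Relocate items one at a time to the least-overlapping grid position."""
    for _ in range(sweeps):
        total = 0
        for i, mask in enumerate(items):
            others = rasterize(items, positions, H, W, skip=i)
            h, w = mask.shape
            best = min((overlap(others, mask, ox, oy), ox, oy)
                       for oy in range(H - h + 1)
                       for ox in range(W - w + 1))
            total += best[0]
            positions[i] = (best[1], best[2])
        if total == 0:                       # an overlap-free layout was found
            return True, positions
    return False, positions

# Toy usage: two 2x3 rectangles and an L-shaped piece in a 4x6 container.
L_piece = np.array([[1, 0], [1, 0], [1, 1]])
items = [np.ones((2, 3), dtype=int), np.ones((2, 3), dtype=int), L_piece]
ok, layout = minimize_overlap(items, [(0, 0), (0, 0), (0, 0)], H=4, W=6)
print("feasible:", ok, "positions:", layout)
```

In a full strip-packing loop this separation step would be wrapped in a search over the container length, shrinking the container while an overlap-free layout can still be found.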
Abstract:
This dissertation investigates the question: has financial speculation contributed to global food price volatility since the mid-2000s? I problematize the mainstream academic literature on the 2008-2011 food price spikes as being dominated by neoclassical economic perspectives and offer new conceptual and empirical insights into the relationship between financial speculation and food. The dissertation is presented as three journal-style manuscripts. Manuscript one uses circuits of capital to conceptualize the link between financial speculators in the global north and populations in the global south. Manuscript two argues that what makes commodity index speculation (aka 'index funds' or index swaps) novel is that it provides institutional investors with what Clapp (2014) calls "financial distance" from the biopolitical implications of food speculation. Finally, manuscript three combines Gramsci's concepts of hegemony and 'the intellectual' with the concept of performativity to investigate the ideological role that public intellectuals and the rhetorical actor 'the market' play in the proliferation and governance of commodity index speculation. The first two manuscripts take a mixed-methods empirical approach, combining regression analysis with discourse analysis, while the third relies on interview data and discourse analysis. The findings show that financial speculation by index swap dealers and hedge funds did indeed significantly contribute to the price volatility of food commodities between June 2006 and December 2014. The results from the interview data affirm these findings. The discourse analysis of the interview data shows that public intellectuals and rhetorical characters such as 'the market' play powerful roles in shaping how food speculation is promoted, regulated and normalized. The significance of the findings is three-fold. First, the empirical findings show that a link does exist between financial speculation and food price volatility. Second, the findings indicate that the post-2008 CFTC and the Dodd-Frank reforms are unlikely to reduce financial speculation or the price volatility that it causes. Third, the findings suggest that institutional investors (such as pension funds) should think critically about how they use commodity index speculation as a way of generating financial earnings.
Abstract:
The dynamical properties of an extended Hubbard model, which is relevant to quarter-filled layered organic molecular crystals, are analyzed. We have computed the dynamical charge correlation function, spectral density, and optical conductivity using Lanczos diagonalization and large-N techniques. As the ratio of the nearest-neighbor Coulomb repulsion, V, to the hopping integral, t, increases there is a transition from a metallic phase to a charge-ordered phase. Dynamical properties close to the ordering transition are found to differ from the ones expected in a conventional metal. Large-N calculations display an enhancement of spectral weight at low frequencies as the system is driven closer to the charge-ordering transition, in agreement with Lanczos calculations. As V is increased the charge correlation function displays a collective mode which, for wave vectors close to (π,π), increases in amplitude and softens as the charge-ordering transition is approached. We propose that inelastic x-ray scattering be used to detect this mode. Large-N calculations predict superconductivity with d_xy symmetry close to the ordering transition. We find that this is consistent with Lanczos diagonalization calculations, on lattices of 20 sites, which find that the binding energy of two holes becomes negative close to the charge-ordering transition.
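For reference, the extended Hubbard Hamiltonian referred to above is usually written in the following standard form (conventions assumed here; the quarter-filled, layered geometry enters through the lattice and the band filling):

\[ H \;=\; -t \sum_{\langle ij\rangle,\sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{H.c.} \right) \;+\; U \sum_{i} n_{i\uparrow} n_{i\downarrow} \;+\; V \sum_{\langle ij\rangle} n_{i} n_{j}, \]

with hopping integral \(t\), on-site repulsion \(U\), and nearest-neighbour repulsion \(V\); the metal to charge-ordered transition discussed in the abstract is driven by the ratio \(V/t\).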
Abstract:
This paper presents a rectangular array antenna with a suitable signal-processing algorithm that is able to steer the beam in azimuth over a wide frequency band. In the previous approach, which was reported in the literature, an inverse discrete Fourier transform technique was proposed for obtaining the signal weighting coefficients. This approach was demonstrated for large arrays in which the physical parameters of the antenna elements were not considered. In this paper, a modified signal-weighting algorithm that works for arbitrary-size arrays is described. Its validity is demonstrated in examples of moderate-size arrays with real antenna elements. It is shown that in some cases, the original beam-forming algorithm fails, while the new algorithm is able to form the desired radiation pattern over a wide frequency band. The performance of the new algorithm is assessed for two cases when the mutual coupling between array elements is both neglected and taken into account.
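As a simple, self-contained illustration of frequency-dependent beam steering for a rectangular array (not the modified weighting algorithm of the paper, and ignoring mutual coupling and element patterns), the sketch below applies conjugate-phase weights at several frequencies and checks where the array factor peaks in azimuth. The array size, element spacing and frequencies are assumed values.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def element_positions(nx, ny, dx, dy):
    """x and y coordinates of an nx x ny rectangular grid of elements."""
    ix, iy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    return ix.ravel() * dx, iy.ravel() * dy

def steering_weights(xs, ys, freq, phi0_deg):
    """Conjugate-phase weights that align all elements toward azimuth phi0."""
    k = 2 * np.pi * freq / C
    phi0 = np.radians(phi0_deg)
    phase = k * (xs * np.cos(phi0) + ys * np.sin(phi0))
    return np.exp(-1j * phase)

def array_factor(xs, ys, w, freq, phi_deg):
    """Magnitude of the in-plane array factor over a set of azimuth angles."""
    k = 2 * np.pi * freq / C
    phi = np.radians(phi_deg)[:, None]
    steer = np.exp(1j * k * (xs * np.cos(phi) + ys * np.sin(phi)))
    return np.abs(steer @ w)

xs, ys = element_positions(4, 4, dx=0.05, dy=0.05)    # 4x4 array, 5 cm spacing
phi_scan = np.linspace(-90, 90, 361)
for f in (1e9, 2e9, 3e9):                             # check several frequencies
    w = steering_weights(xs, ys, f, phi0_deg=30)
    af = array_factor(xs, ys, w, f, phi_scan)
    print(f"{f/1e9:.0f} GHz: array-factor peak at {phi_scan[np.argmax(af)]:.1f} deg")
```

With ideal, uncoupled elements the peak stays at the commanded azimuth across the band; the paper's contribution lies precisely in keeping this behaviour for arbitrary-size arrays with real elements, where the simple weighting above can fail.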
Abstract:
We propose a scheme for parametric amplification and phase conjugation of an atomic Bose-Einstein condensate (BEC) via stimulated dissociation of a BEC of molecular dimers consisting of bosonic atoms. This can potentially be realized via coherent Raman transitions or using a magnetic Feshbach resonance. We show that the interaction of a small incoming atomic BEC with a (stationary) molecular BEC can produce two counterpropagating atomic beams - an amplified atomic BEC and its phase-conjugate or "time-reversed" replica. The two beams can possess strong quantum correlation in the relative particle number, with squeezed number-difference fluctuations.
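Treating the molecular condensate as an undepleted classical pump, processes of this kind are often modelled with a parametric Hamiltonian of the following schematic form (notation and numerical factors assumed here, not taken from the paper):

\[ H_{\mathrm{eff}} \;\approx\; \sum_{\mathbf{k}} \hbar\omega_{\mathbf{k}}\, \hat{a}^{\dagger}_{\mathbf{k}} \hat{a}_{\mathbf{k}} \;+\; \hbar\chi \sum_{\mathbf{k}} \left( \hat{a}^{\dagger}_{\mathbf{k}} \hat{a}^{\dagger}_{-\mathbf{k}} + \hat{a}_{\mathbf{k}} \hat{a}_{-\mathbf{k}} \right), \]

where \(\hat{a}_{\mathbf{k}}\) annihilates an atom of momentum \(\hbar\mathbf{k}\) and \(\chi\) is proportional to the atom-molecule coupling and the molecular-condensate amplitude. The pair-creation term \(\hat{a}^{\dagger}_{\mathbf{k}} \hat{a}^{\dagger}_{-\mathbf{k}}\) is what amplifies an incoming atomic beam at \(+\mathbf{k}\) while generating its counterpropagating phase-conjugate partner at \(-\mathbf{k}\), and it is also the origin of the number-difference squeezing between the two beams.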
Abstract:
The Gauss-Marquardt-Levenberg (GML) method of computer-based parameter estimation, in common with other gradient-based approaches, suffers from the drawback that it may become trapped in local objective function minima, and thus report optimized parameter values that are not, in fact, optimized at all. This can seriously degrade its utility in the calibration of watershed models where local optima abound. Nevertheless, the method also has advantages, chief among these being its model-run efficiency, and its ability to report useful information on parameter sensitivities and covariances as a by-product of its use. It is also easily adapted to maintain this efficiency in the face of potential numerical problems (that adversely affect all parameter estimation methodologies) caused by parameter insensitivity and/or parameter correlation. The present paper presents two algorithmic enhancements to the GML method that retain its strengths, but which overcome its weaknesses in the face of local optima. Using the first of these methods an intelligent search for better parameter sets is conducted in parameter subspaces of decreasing dimensionality when progress of the parameter estimation process is slowed either by numerical instability incurred through problem ill-posedness, or when a local objective function minimum is encountered. The second methodology minimizes the chance of successive GML parameter estimation runs finding the same objective function minimum by starting successive runs at points that are maximally removed from previous parameter trajectories. As well as enhancing the ability of a GML-based method to find the global objective function minimum, the latter technique can also be used to find the locations of many non-global optima (should they exist) in parameter space. This can provide a useful means of inquiring into the well-posedness of a parameter estimation problem, and for detecting the presence of bimodal parameter and predictive probability distributions. The new methodologies are demonstrated by calibrating a Hydrological Simulation Program-FORTRAN (HSPF) model against a time series of daily flows. Comparison with the SCE-UA method in this calibration context demonstrates a high level of comparative model run efficiency for the new method.
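To illustrate the second enhancement in miniature, the sketch below pairs a bare-bones Gauss-Marquardt-Levenberg update (damped normal equations with an adaptive Marquardt lambda) with a restart rule that launches each new run from the random candidate farthest from all previously visited parameter vectors. The toy exponential model, parameter bounds and sampling settings are assumptions for illustration; this is not the paper's implementation or an HSPF calibration.

```python
import numpy as np

def residuals(p, t, obs):
    a, b = p
    return obs - a * np.exp(-b * t)           # toy exponential-decay model

def jacobian(p, t, obs, eps=1e-6):
    """Forward-difference Jacobian of the residual vector."""
    r0 = residuals(p, t, obs)
    J = np.empty((r0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps
        J[:, j] = (residuals(p + dp, t, obs) - r0) / eps
    return J

def gml(p0, t, obs, lam=1e-2, iters=50):
    """Damped Gauss-Newton (Marquardt) iterations; returns the estimate and its trajectory."""
    p = np.asarray(p0, float)
    path = [p.copy()]
    for _ in range(iters):
        r, J = residuals(p, t, obs), jacobian(p, t, obs)
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        if np.sum(residuals(p + step, t, obs) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam / 3         # accept step, relax damping
        else:
            lam *= 3                            # reject step, increase damping
        path.append(p.copy())
    return p, np.array(path)

def farthest_start(visited, bounds, n_candidates=500, rng=np.random.default_rng(0)):
    """Pick the random candidate maximally removed from all visited parameter points."""
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_candidates, bounds.shape[0]))
    dists = np.min(np.linalg.norm(cand[:, None, :] - visited[None, :, :], axis=2), axis=1)
    return cand[np.argmax(dists)]

# Multi-start calibration of the toy model against synthetic observations.
t = np.linspace(0, 5, 40)
obs = 2.0 * np.exp(-0.7 * t) + 0.01 * np.random.default_rng(1).normal(size=t.size)
bounds = np.array([[0.1, 5.0], [0.1, 5.0]])
visited = np.empty((0, 2))
best, best_sse = None, np.inf
start = np.array([1.0, 1.0])
for run in range(3):
    p, path = gml(start, t, obs)
    visited = np.vstack([visited, path])
    sse = np.sum(residuals(p, t, obs) ** 2)
    if sse < best_sse:
        best, best_sse = p, sse
    start = farthest_start(visited, bounds)    # next run starts far from past trajectories
print("best parameters:", best)
```

Besides improving the odds of reaching the global minimum, keeping the estimates from each restart also reveals any non-global optima that the runs settle into, in line with the diagnostic use described above.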
Abstract:
In this work on The Influence of Clan Women on Post-Exilic Prophetic Thought: A Study of Isaiah 57:1-21, we present research intended to establish, fundamentally, who the three groups of clan women that appear in Isaiah 57:3-9 were, namely the soothsayer (Hebrew ʿōnenāh), the adulteress (from the root nʾp, "adultery"), and the prostitute (zōnāh), and from there to develop what influence they had on prophecy in the post-exilic period. Two methods were used for this task. The first, a diachronic method, showed that the text takes a very negative view of these women, since the background against which the text is set is strongly patriarchal. When a second, synchronic and intertextual method was applied, however, the result proved different, because the set of texts in which the passage is embedded, namely Isaiah 56:1-12, 58:1-14 and 61:1-11, reflects an inclusive programme. Thus, in Isaiah 56:3-4 the son of the foreigner (ben-hannēkār) and the eunuchs (sārîsîm) are admitted into the community; in Isaiah 58:1 the house of Jacob (ûlᵉbêt Yaʿăqōb), represented by a group of men, is rebuked because of its fasting; and in Isaiah 61:5-6 strangers (zārîm) and sons of the foreigner (ûbᵉnê nēkār) are the ones who will feed the community. From this arose the hypothesis that a negative view of these women could not be accepted within an inclusive project; the question, however, still has to be answered. We therefore set out to map the clan way of life in Genesis, a set of texts that deals mainly with the family/clan. When we studied some of the mythical mothers (Eve, Sarah, Hagar, the daughters of Lot, and Tamar) and compared them with the women of Isaiah 57:3-9, many of their characteristics proved similar. We could thus see that all these clan women, because they possessed knowledge of the animal and plant kingdoms, exercised influence over the life and death of families/clans, and for that reason had to be fought against by groups of men over time. Another important feature of the post-exilic period is the movement undertaken by the families/clans; this 'departure' is always charged with an abundance of fertility and with the resolution of conflict through solidarity. The same should hold in the prophecy: when a text 'unfavourable' to a group of women becomes crystallized, what is in fact being exposed is violence against them.
Abstract:
This work studies the development of polymer membranes for the separation of hydrogen and carbon monoxide from a syngas produced by the partial oxidation of natural gas. The CO product is then used for the large scale manufacture of acetic acid by reaction with methanol. A method of economic evaluation has been developed for the process as a whole and a comparison is made between separation of the H2/CO mixture by a membrane system and the conventional method of cryogenic distillation. Costs are based on bids obtained from suppliers for several different specifications for the purity of the CO fed to the acetic acid reactor. When the purity of the CO is set at that obtained by cryogenic distillation it is shown that the membrane separator offers only a marginal cost advantage. Cost parameters for the membrane separation systems have been defined in terms of effective selectivity and cost permeability. These new parameters, obtained from an analysis of the bids, are then used in a procedure which defines the optimum degree of separation and recovery of carbon monoxide for a minimum cost of manufacture of acetic acid. It is shown that a significant cost reduction is achieved with a membrane separator at the optimum process conditions. A method of "targeting" the properties of new membranes has been developed. This involves defining the properties for new (hypothetical, yet-to-be-developed) membranes such that their use for the hydrogen/carbon monoxide separation will produce a reduced cost of acetic acid manufacture. The use of the targeting method is illustrated in the development of new membranes for the separation of hydrogen and carbon monoxide. The selection of polymeric materials for new membranes is based on molecular design methods which predict the polymer properties from the molecular groups making up the polymer molecule. Two approaches have been used. One method develops the analogy between gas solubility in liquids and that in polymers. The UNIFAC group contribution method is then used to predict gas solubility in liquids. In the second method the polymer Permachor number, developed by Salame, has been correlated with hydrogen and carbon monoxide permeabilities. These correlations are used to predict the permeabilities of gases through polymers. Materials have been tested for hydrogen and carbon monoxide permeabilities and improvements in expected economic performance have been achieved.
Abstract:
Antisense oligonucleotides (AODNs) can selectively inhibit individual gene expression by binding specifically to mRNA. The over-expression of the epidermal growth factor receptor (EGFR) has been observed in human breast and glioblastoma tumours and therefore AODNs designed to target the EGFR would be a logical approach to treat such tumours. However, poor pharmacokinetic/pharmacodynamic and cellular uptake properties of AODNs have limited their potential to become successful therapeutic agents. Biodegradable polymeric poly (lactide-co-glycolide) (P(LA-GA)) and dendrimer delivery systems may allow us to overcome these problems. The use of combination therapy of AODNs and cytotoxic agents such as 5-fluorouracil (5-FU) in biodegradable polymeric formulations may further improve therapeutic efficacy. AODN and 5-FU were either co-entrapped in a single microsphere formulation or individually entrapped in two separate microsphere formulations (double emulsion method) and release profiles determined in vitro. The release rates (biphasic) of the two agents were significantly slower when co-entrapped as a single microsphere formulation compared to those obtained with the separate formulations. Sustained release over 35 days was observed in both types of formulation. Naked and microsphere-loaded AODN and 5-FU (in separate formulations) were tested on an A431 vulval carcinoma cell line. Combining naked or encapsulated drugs produced a greater reduction in viable cell number as compared with either agent alone. However, controls and Western blotting indicated that non-sequence specific cytotoxic effects were responsible for the differences in viable cell number. The uptake properties of an anionic dendrimer based on a pentaerythritol structure covalently linked to AODNs (targeting the EGFR) have been characterised. The cellular uptake of AODN linked to the dendrimer was up to 3.5-fold higher in A431 cells as compared to naked AODN. Mechanistic studies suggested that receptor-mediated and adsorptive (binding protein-mediated) endocytosis were the predominant uptake mechanisms for the dendrimer-AODN. RNase H cleavage assay suggested that the dendrimer-AODN was able to bind and cleave the target site. A reduction of 20%, 28% and 45% in EGFR expression was observed with 0.05μM, 0.1μM and 0.5μM dendrimer-AODN treatments respectively, with a reduction in viable cell number. These results indicated that the dendrimer delivery system may reduce viable cell number by an antisense specific mechanism.