972 results for Classical methods


Relevance:

60.00%

Publisher:

Abstract:

This paper presents a new methodology for analyzing aeroelastic stability over a continuous region of the flight envelope, with velocity and altitude as varying parameters. The focus of the paper is to demonstrate that linear matrix inequalities can be used to evaluate aeroelastic stability over a region of the flight envelope rather than at a single point, as classical methods do. The proposed methodology can also be used to study whether a system remains stable during an arbitrary motion from one point to another in the flight envelope, i.e., when the problem becomes time-variant. The main idea is to represent the system as a polytopic differential inclusion, using rational-function approximation to write the model in the time domain. The theory is outlined and simulations are carried out on the benchmark AGARD 445.6 wing to demonstrate the method. The classical pk-method is used to compare results and validate the approach. The method is shown to be efficient at identifying stability regions in the flight envelope. (C) 2014 Elsevier Ltd. All rights reserved.
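The vertex test at the heart of the polytopic-inclusion idea can be sketched numerically. The matrices below are hypothetical stand-ins, not the AGARD 445.6 model, and solving the Lyapunov equation at the polytope centre and then checking the resulting P at the vertices is only a simplified substitute for a proper LMI solver:

```python
import numpy as np

# Hypothetical vertex matrices of a polytopic differential inclusion
# (stand-ins for the aeroelastic model at two flight-envelope corners).
A1 = np.array([[-1.0, 2.0], [-3.0, -1.5]])
A2 = np.array([[-0.8, 1.5], [-2.5, -1.2]])
n = 2

# Solve the Lyapunov equation A0^T P + P A0 = -I at the polytope centre
# via its vectorised (Kronecker) form.
A0 = 0.5 * (A1 + A2)
K = np.kron(A0.T, np.eye(n)) + np.kron(np.eye(n), A0.T)
P = np.linalg.solve(K, -np.eye(n).reshape(-1)).reshape(n, n)
P = 0.5 * (P + P.T)  # symmetrise against round-off

def lmi_holds(A, P, tol=1e-9):
    """True if A^T P + P A is negative definite."""
    M = A.T @ P + P @ A
    return bool(np.all(np.linalg.eigvalsh(M) < -tol))

# A common quadratic Lyapunov function certifies stability over the
# whole polytope, not just at a single flight-envelope point.
stable = (np.all(np.linalg.eigvalsh(P) > 0)
          and lmi_holds(A1, P) and lmi_holds(A2, P))
```

If the LMI holds at every vertex for one P, it holds for every convex combination of the vertex matrices, which is what extends the single-point check to a region.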

Relevance:

60.00%

Publisher:

Abstract:

Thermohaline properties and currents sampled at an anchor station in the Piaçaguera Channel (Santos Estuary) during the austral winter were analyzed in terms of tidal (neap and spring tidal cycles) and non-tidal conditions, with the objective of characterizing the stratification, circulation and salt transport due to the fortnightly tidal modulation. Classical methods of observational data analysis of hourly, nearly synoptic observations and analytical simulations of nearly steady-state salinity and longitudinal velocity profiles were used. During the neap tidal cycle the flood (v < 0) and ebb (v > 0) velocities varied in the range of -0.20 m/s to 0.30 m/s, associated with a small salinity variation from surface to bottom (26.4 psu to 30.7 psu). In the spring tidal cycle the velocities increased, varying in the range of -0.40 m/s to 0.45 m/s, but the salinity stratification remained almost unaltered. The steady-state salinity and velocity profiles simulated with an analytical model showed good agreement with the observational profiles (Skill near 1.0). During the transitional fortnightly tidal modulation period there were no changes in the channel classification (type 2a - partially mixed and weakly stratified), because the potential energy rate was too low to enhance halocline erosion. These results, together with the high water-column vertical stability (RiL > 20) and the low estuarine Richardson number (RiE = 1.6), lead to the following conclusions: i) the driving mechanism for the estuarine circulation and mixing was mainly a balance of the freshwater discharge and the tidal forcing associated with the baroclinic component of the pressure gradient force; ii) there were no changes in the thermohaline and circulation characteristics due to the fortnightly tidal modulation; and iii) the nearly steady-state vertical salinity and velocity profiles were well simulated with a classical theoretical analytical model.
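The bulk stability diagnostic quoted above can be illustrated with a layer Richardson number. The formula below is one common bulk definition, and the depth and velocity scale are illustrative guesses rather than the paper's measured values, so the number it produces is not the RiL reported in the abstract:

```python
G = 9.8       # gravity, m/s^2
RHO0 = 1000.0 # reference density, kg/m^3
BETA = 0.78   # approx. haline contraction drho/dS, kg/m^3 per psu (assumed)

def layer_richardson(s_surface, s_bottom, depth, u_scale):
    """Bulk layer Richardson number RiL = g*(drho/rho0)*h / u^2.
    One common definition; a stand-in for the paper's exact form."""
    drho = BETA * (s_bottom - s_surface)
    return G * (drho / RHO0) * depth / u_scale**2

# Surface/bottom salinities from the abstract; depth and velocity
# scale are illustrative, not observed values.
ril = layer_richardson(26.4, 30.7, depth=10.0, u_scale=0.30)
stable_stratification = ril > 0.25  # classical stability threshold
```

Values well above the 0.25 threshold, as in the abstract (RiL > 20), indicate that shear is too weak to erode the halocline.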

Relevance:

60.00%

Publisher:

Abstract:

Many engineering sectors face multi-objective optimization problems. Although the idea behind these problems is simple and well established, implementing a procedure to solve them is not a trivial task. The use of evolutionary algorithms to find candidate solutions is widespread; they usually supply a discrete picture of the non-dominated solutions, a Pareto set. Although knowing the non-dominated solutions is very valuable, an additional criterion is needed to select the one solution to be deployed. To better support the design process, this paper presents a new method for solving non-linear multi-objective optimization problems by adding a control function that guides the optimization process over the Pareto set, which need not be found explicitly. The proposed methodology differs from classical methods that combine the objective functions into a single scalar measure, and is based on a single run of non-linear single-objective optimizers.
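The notion of a Pareto set of non-dominated solutions, which the control function traverses, can be made concrete with a minimal dominance filter (a generic sketch, unrelated to the paper's actual optimizer):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b
    (minimisation of all objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Toy bi-objective values: (3.0, 3.5) is dominated by (2.0, 3.0).
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 3.5), (4.0, 1.0)]
front = pareto_set(pts)
```

The filter yields the discrete Pareto picture an evolutionary algorithm would supply; the paper's contribution is precisely to avoid materialising this set and instead steer a single-objective run across it.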


Relevance:

60.00%

Publisher:

Abstract:

The discovery and development of a new drug are time-consuming, difficult and expensive. This complex process has evolved from classical methods into an integration of modern technologies and innovative strategies aimed at the design of new chemical entities to treat a variety of diseases. The development of new drug candidates is often limited by initial compounds lacking reasonable chemical and biological properties for further lead optimization. Huge libraries of compounds are frequently selected for biological screening using a variety of techniques and standard models to assess potency, affinity and selectivity. In this context, it is very important to study the pharmacokinetic profile of the compounds under investigation. Recent advances have been made in the collection of data and the development of models to assess and predict the pharmacokinetic properties (ADME: absorption, distribution, metabolism and excretion) of bioactive compounds in the early stages of drug discovery projects. This paper provides a brief perspective on the evolution of in silico ADME tools, addressing challenges, limitations and opportunities in medicinal chemistry.
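One of the simplest in silico filters applied at this early screening stage is Lipinski's rule of five. The sketch below is a generic illustration of such a property filter, not a method from the paper, and the descriptor values are hypothetical:

```python
def lipinski_pass(mw, logp, h_donors, h_acceptors):
    """Lipinski 'rule of five' filter: a crude proxy for oral
    absorption used to triage screening libraries, not an ADME
    predictor. At most one violation is tolerated."""
    violations = sum([
        mw > 500,          # molecular weight, Da
        logp > 5,          # octanol-water partition coefficient
        h_donors > 5,      # hydrogen-bond donors
        h_acceptors > 10,  # hydrogen-bond acceptors
    ])
    return violations <= 1

# Hypothetical descriptor record for a screening compound.
ok = lipinski_pass(mw=342.4, logp=2.1, h_donors=2, h_acceptors=5)
```

Modern in silico ADME tools replace such hard cut-offs with trained models, but the workflow — compute descriptors, filter before synthesis — is the same.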

Relevance:

60.00%

Publisher:

Abstract:

In this paper we present a new model for optical flow calculation using a variational formulation that preserves flow discontinuities much better than classical methods. We study the Euler-Lagrange equations associated with the variational problem. In the case of quadratic energy, we show the existence and uniqueness of the corresponding evolution problem. Since our method avoids linearization of the optical flow constraint, it can recover large displacements in the scene. We avoid convergence to irrelevant local minima by embedding our method in a linear scale-space framework and using a focusing strategy from coarse to fine scales.
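For contrast, the classical linearized baseline can be sketched as a minimal single-scale Horn-Schunck iteration (a generic textbook scheme, not the authors' method; the image and parameters are illustrative):

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=200):
    """Minimal single-scale Horn-Schunck flow: the classical scheme
    that *linearises* the brightness-constancy constraint, which is
    exactly what limits it to small displacements."""
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def avg(f):  # 4-neighbour average with edge padding
        p = np.pad(f, 1, mode="edge")
        return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1]
                       + p[1:-1, :-2] + p[1:-1, 2:])

    for _ in range(n_iter):
        ub, vb = avg(u), avg(v)
        t = (Ix * ub + Iy * vb + It) / (alpha**2 + Ix**2 + Iy**2)
        u, v = ub - Ix * t, vb - Iy * t
    return u, v

# Toy scene: a Gaussian blob translated one pixel to the right,
# so the recovered horizontal flow should be positive on average.
y, x = np.mgrid[0:32, 0:32]
I1 = np.exp(-((x - 16.0) ** 2 + (y - 16.0) ** 2) / 20.0)
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2)
```

The quadratic smoothness term here is what blurs motion boundaries; the paper's discontinuity-preserving formulation replaces it, and the coarse-to-fine focusing recovers the large displacements this linearized scheme misses.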

Relevance:

60.00%

Publisher:

Abstract:

Work carried out by: Garijo, J. C., Hernández León, S.

Relevance:

60.00%

Publisher:

Abstract:

Electrical impedance tomography (EIT) is intended to serve as a low-cost tomographic method, free of side effects, in medical diagnostics, e.g. in mammography. With EIT, cancerous tissue can be distinguished from healthy tissue, since it exhibits significantly increased conductivity; EIT can thus complement the classical diagnostic procedures. In young women with denser fatty tissue, for example, identifying a mammary carcinoma by X-ray tomography is not always possible. The goal of this work was to develop a prototype for impedance tomography and to test possible applications. The tomograph was built in collaboration with Dr. K. H. Georgi. It injects alternating currents through low-impedance electrodes on the body surface; the potentials at these electrodes can be set programmably, and additional high-impedance electrodes serve for potential measurement. To bridge the skin resistance, AC frequencies of 20-100 kHz are used. By measuring current and potential on different electrodes, the problem of the imprecisely known skin resistance can be circumvented. In principle, the Mainz EIT system can perform 100 measurements per second. On the basis of data obtained with the Mainz EIT, different reconstruction algorithms were to be tested and developed further. Various reconstruction algorithms for the mathematically ill-posed EIT problem have been considered in the past. They rest essentially on two strategies: linearization with iterative solution, and region-detection methods. I modified the iterative methods so that conductivity increases and conductivity decreases are treated on an equal footing.
For the modified algorithm, two different reconstruction algorithms were programmed and tested with synthetic data: reconstruction via the approximate inverse, and reconstruction with a discretization. Specifically for the discretization-based reconstruction, a method was developed that allows additional information to be taken into account, which improves the reconstruction; the region-detection algorithm can supply this additional information. In this work, a newer region-detection method was modified so that reconstruction also became possible with separate current and voltage electrodes. Excellent reconstructions can be achieved with difference data. For medical applications, however, absolute measurements are needed, i.e. without a reference (empty) measurement. The expected effect of a conductivity inhomogeneity is very small and, as the difference of two large numbers, very difficult to determine. The developed algorithms also cope well with absolute data.
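The linearize-and-iterate strategy can be illustrated with a single Tikhonov-regularized step on a toy linearized problem. The sensitivity matrix below is random, standing in for the real EIT Jacobian of the Mainz system, and the pixel grid is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearised EIT: measurements d = J @ dsigma + noise, with J a
# hypothetical sensitivity (Jacobian) matrix, NOT the Mainz forward model.
n_meas, n_pix = 40, 25
J = rng.normal(size=(n_meas, n_pix))

# Ground truth: one pixel with increased conductivity.
dsigma_true = np.zeros(n_pix)
dsigma_true[12] = 1.0
d = J @ dsigma_true + 0.01 * rng.normal(size=n_meas)

# One Tikhonov-regularised step: minimise ||J x - d||^2 + lam ||x||^2,
# the standard remedy for the ill-posedness of the EIT problem.
lam = 1e-2
x = np.linalg.solve(J.T @ J + lam * np.eye(n_pix), J.T @ d)
```

Because the update is symmetric in the sign of x, conductivity increases and decreases are treated on an equal footing, which is the property the modified iterative methods above are designed to have.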

Relevance:

60.00%

Publisher:

Abstract:

The topic of abuse of law in the tax field has become particularly prominent in recent years. After a necessary introduction to the problem, this work addresses abuse of law in tax matters through an analysis of the classical tools of hermeneutics, observing how the general anti-abuse clause becomes intertwined with the principle prohibiting abuse of law developed at the European level, itself a concretization of the broader principle of effectiveness of European Union law. The analysis takes as its models, on the one hand, the German general anti-abuse clause, adopted as early as the first post-war period and amended several times over the years, and, on the other, the European principle prohibiting abuse of law. The joint examination reveals an interpretative short circuit: the European principle expresses the same concepts as the pre-reform German national clause, which, following the Halifax and Cadbury Schweppes judgments, underwent a significant amendment, so that the general clause now needs the European principle in order to be interpreted. The thesis also shows how this circuit is aggravated by tensions within the European institutions themselves: despite the existence of a judicially developed principle, the Member States have been invited to introduce a general anti-abuse clause whose formulation refers back to the prohibition of abuse of law elaborated by the Court of Justice.

Relevance:

60.00%

Publisher:

Abstract:

Thematic maps obtained by classifying remotely sensed images must deliver products with optimal accuracy. Images registered from aircraft offer very satisfactory spatial resolution, but classical thematic-classification methods do not always give better results than those obtained with satellite data. To improve the classification results, this work jointly uses first-return LiDAR (Light Detection And Ranging) data registered simultaneously with airborne spectral sensor data. The final thematic classification of the scene under study was obtained, quantified and discussed with and without the LiDAR data, applying several methods: Maximum Likelihood Classification, Support Vector Machines with four different kernel functions, and the Isodata clustering algorithm (ML, SVM-L, SVM-P, SVM-RBF, SVM-S, Isodata). The best results are obtained for SVM with the sigmoid kernel. The results can then be correlated with other physical parameters of great interest, such as Manning's roughness coefficient, for incorporation into a GIS and application in hydraulic modelling.
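The benefit of stacking a LiDAR height channel onto spectral bands can be shown on toy data. A nearest-centroid rule stands in for the SVM here, and the two classes and their statistics are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two land-cover classes with near-identical spectra but different
# first-return heights (e.g. grass vs. tree canopy) -- synthetic data,
# not the paper's scene.
spec_a = rng.normal(0.5, 0.05, size=(50, 4))
h_a = rng.normal(0.3, 0.1, size=50)     # low vegetation, metres
spec_b = rng.normal(0.5, 0.05, size=(50, 4))
h_b = rng.normal(8.0, 1.0, size=50)     # canopy, metres

def centroid_classify(train_x, train_y, sample):
    """Nearest-centroid classifier: a simple stand-in for the SVM."""
    cents = {c: train_x[train_y == c].mean(axis=0)
             for c in np.unique(train_y)}
    return min(cents, key=lambda c: np.linalg.norm(sample - cents[c]))

# Stack spectral bands and the LiDAR height channel per pixel.
X = np.vstack([np.column_stack([spec_a, h_a]),
               np.column_stack([spec_b, h_b])])
y = np.array([0] * 50 + [1] * 50)

# A tall pixel with average spectra is only separable via the height channel.
pred = centroid_classify(X, y, np.array([0.5, 0.5, 0.5, 0.5, 7.5]))
```

On spectra alone the two classes overlap completely; the added height channel is what separates them, which is the rationale for fusing the LiDAR first return with the spectral bands.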

Relevance:

60.00%

Publisher:

Abstract:

The subject of this paper is the modelling of singularities in potential problems using the Boundary Integral Equation Method on the boundary of the domain under study. As an alternative to classical remedies (discretization refinement, asymptotic analysis, higher-order interpolatory shape functions), a new hypothesis is presented: the term responsible for the singularity is included in the interpolatory shape function. As several examples show, the results improve on previous solutions and computation time is also radically shortened.
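The effect of enriching the shape functions with the known singular term can be demonstrated on a one-dimensional least-squares fit. The function, the square-root exponent and the bases below are illustrative, not the paper's formulation:

```python
import numpy as np

# Target with an x^(1/2) corner singularity, as arises in potential
# problems near a re-entrant corner (illustrative choice).
x = np.linspace(0.0, 1.0, 200)
f = np.sqrt(x) + 0.3 * x

# Ordinary polynomial shape functions vs. a basis enriched with the
# known singular term sqrt(x), per the hypothesis in the abstract.
poly = np.column_stack([np.ones_like(x), x, x**2, x**3])
enriched = np.column_stack([poly, np.sqrt(x)])

def fit_err(B):
    """Max absolute error of the least-squares fit in basis B."""
    c, *_ = np.linalg.lstsq(B, f, rcond=None)
    return float(np.max(np.abs(B @ c - f)))

err_poly = fit_err(poly)
err_enriched = fit_err(enriched)
```

The enriched basis reproduces the singular behaviour exactly, while the polynomial basis cannot, however fine the discretization: the same reason the paper includes the singularity-responsible term in the interpolation instead of refining the mesh.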

Relevance:

60.00%

Publisher:

Abstract:

Spanish dam-safety regulations require that the displacements and deformations of dam structures and their foundations be monitored. Today, many dams in operation lack an adequate monitoring system for these variables, since installing classical precision methods may not be technically feasible and, where it is, entails significant cost and an uncertain guarantee for the execution of the associated civil works. With the development of new technologies, computing and telecommunications, new displacement-monitoring systems have emerged. Current GPS systems, designed for structural control, machine guidance, navigation and surveying, slope stability, subsidence, etc., reach centimetre-level accuracies. A motion-control system based on DGPS (differential GPS) technology, combined with a statistical filter, achieves sensitivities of up to ±1 mm, sufficient for routine dam monitoring under current regulations. This accuracy suits the radial displacements of dams, where crest amplitudes of up to 15 mm in gravity dams and up to 45 mm in arch dams are common. This research analyses the feasibility of the DGPS system for monitoring movements of concrete dams, comparing the different monitoring systems and their correlation both with physical variables and with variables tied to the differential GPS system itself. To answer these questions and to validate and incorporate this technology into civil engineering in Spain, a case study was carried out at La Aceña (Ávila).
This is one of the few Spanish dams being monitored with this technology simultaneously with the classical monitoring systems and some other recently deployed ones. The research was organized to answer several questions that the dam operator faces and that are not addressed in the state of the art: how to configure the system spatially and which control points are needed, which communication systems are most reliable, the associated costs, software calibration, service life and required maintenance, and the possibility of remote data control. Among the advantages of the DGPS system are its low installation cost, the possibility of remote operation, and the accuracy and absolute character of the data. It is especially suitable for isolated or poorly communicated dams, and for dams whose operator has no reference for the magnitude of the displacements or deformations over the dam's history. Among the drawbacks of any system relying on new technologies, the importance of telecommunications stands out, whether locally at the dam itself or from the dam to the operation's control centre. With the experience gained in dam-safety management, and on the basis of the recent deployment of the new monitoring methods described, each of their advantages and drawbacks could be analysed. Chapter 5 presents a decision table for the operator that will serve as a starting point for future investments. The impact of this research is reflected in the publication of several articles in indexed journals and in the discussion it has prompted among managers and professionals at the national and international conferences where preliminary results have been presented.
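The averaging idea behind a statistical filter that takes centimetre-level DGPS epochs down to millimetre-level sensitivity can be sketched with simulated data and a plain moving average; the actual filter used at La Aceña is not described in detail here, so this is only a generic illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated DGPS radial-displacement series: a true crest displacement
# of 5 mm buried in ~1 cm single-epoch noise (illustrative values).
true_mm = 5.0
raw = true_mm + 10.0 * rng.normal(size=5000)

def running_mean(series, window):
    """Moving-average filter: a simple stand-in for the statistical
    filter that averages many cm-level epochs into a mm-level estimate."""
    c = np.cumsum(np.insert(series, 0, 0.0))
    return (c[window:] - c[:-window]) / window

smooth = running_mean(raw, 1000)
```

Averaging N independent epochs shrinks the noise by a factor of sqrt(N); with ~1000 epochs, centimetre-level scatter drops to a few tenths of a millimetre, which is the order of sensitivity the ±1 mm claim relies on.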

Relevance:

60.00%

Publisher:

Abstract:

Ion-pair reversed-phase high performance liquid chromatography (IP RP HPLC) is presented as a new, superior method for the analysis of RNA. IP RP HPLC provides a fast and reliable alternative to classical methods of RNA analysis, including separation of different RNA species, quantification and purification. RNA is stable under the analysis conditions used; degradation of RNA during the analyses was not observed. The versatility of IP RP HPLC for RNA analysis is demonstrated. Components of an RNA ladder, ranging in size from 155 to 1770 nt, were resolved. RNA transcripts of up to 5219 nt were analyzed, their integrity determined and they were quantified and purified. Purification of mRNA from total RNA is described, separating mouse rRNA from poly(A)+ mRNA. IP RP HPLC is also suitable for the separation and purification of DIG-labeled from unlabeled RNA. RNA purified by IP RP HPLC exhibits improved stability.

Relevance:

60.00%

Publisher:

Abstract:

A new method for computing evolutionary distances between DNA sequences is proposed. Contrasting with classical methods, the underlying model does not assume that sequence base compositions (A, C, G, and T contents) are at equilibrium, thus allowing unequal base compositions among compared sequences. This makes the method more efficient than the usual ones in recovering phylogenetic trees from sequence data when base composition is heterogeneous within the data set, as we show by using both simulated and empirical data. When applied to small-subunit ribosomal RNA sequences from several prokaryotic or eukaryotic organisms, this method provides evidence for an early divergence of the microsporidian Vairimorpha necatrix in the eukaryotic lineage.
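One well-known distance that, like the proposed method, does not assume equilibrium base compositions is the LogDet (paralinear) distance. The sketch below implements that classical alternative, not the paper's own estimator:

```python
import numpy as np

BASES = "ACGT"

def logdet_distance(seq1, seq2):
    """LogDet (paralinear) distance between two aligned DNA sequences.
    It remains consistent when base compositions differ between the
    sequences, unlike e.g. the Jukes-Cantor distance."""
    # Divergence matrix F: joint frequencies of base pairs per site.
    F = np.zeros((4, 4))
    for a, b in zip(seq1, seq2):
        F[BASES.index(a), BASES.index(b)] += 1
    F /= F.sum()
    fx = F.sum(axis=1)  # base composition of seq1
    fy = F.sum(axis=0)  # base composition of seq2
    # d = -(1/4) [ ln det F - (1/2)(sum ln fx + sum ln fy) ]
    return -0.25 * (np.log(np.linalg.det(F))
                    - 0.5 * (np.log(fx).sum() + np.log(fy).sum()))

s1 = "ACGT" * 10
s2 = "C" + s1[1:]  # one substitution
```

Because the correction uses the observed (not assumed) compositions fx and fy, trees recovered from compositionally heterogeneous data are less biased, which is the same motivation as in the abstract.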

Relevance:

60.00%

Publisher:

Abstract:

Population structure and linkage disequilibrium are two processes fundamental to evolutionary and association-mapping studies. Traditionally, both have been investigated with commonly used classical methods, which have certainly produced great advances in the understanding of species' evolutionary processes. In general, however, none of them takes a genealogical view that considers genetic events in the past, which hinders the interpretation of the variation patterns observed in the present. An approach that allows such retrospective investigation, based on currently observed polymorphism, is coalescent theory. The objective of this work was therefore to analyse, on the basis of coalescent theory, the population structure and linkage disequilibrium of a worldwide panel of sorghum (Sorghum bicolor) accessions. Analyses of mutation, migration with gene flow, and recombination were performed for five genomic regions related to plant height and maturity (Dw1, Dw2, Dw4, Ma1 and Ma3) and for seven previously selected populations. In general, high mean gene flow (M = m/μ = 41.78-52.07) was observed among populations, both per genomic region and across all regions simultaneously. The patterns suggest intense exchange of accessions and a specific evolutionary history for each genomic region, showing the importance of analysing loci individually. The mean number of migrants per generation (M) was not symmetric between reciprocal pairs of populations, in both the individual and the simultaneous analyses of the regions. This suggests that the way populations have related, and continue to interact, evolutionarily is not uniform, showing that the classical methods used to investigate population structure can be unsatisfactory. Low mean recombination rates (ρL = 2Ner = 0.030-0.246) were observed under the model of constant recombination along the region.
Low and high mean recombination rates (ρr = 2Ner = 0.060-3.395) were estimated under the model of variable recombination along the region. The traditional method (r²) and the coalescent-based method (E[r² rhomap]) used to estimate linkage disequilibrium gave similar results for some genomic regions and populations. However, r² suggested discontinuous disequilibrium patterns on several occasions, hindering the understanding and characterization of possible association blocks. The coalescent-based method (E[r² rhomap]) provided results that appeared more consistent, and it may be an important strategy for refining non-random association patterns. The results found here suggest that genetic mapping from a single gene pool may be insufficient to detect causal associations important for quantitative traits in sorghum.
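The traditional pairwise statistic r², contrasted above with the coalescent-based estimator, can be computed directly from haplotype and allele frequencies (the frequencies below are illustrative, not the sorghum data):

```python
def r_squared(p_ab, p_a, p_b):
    """Classical pairwise linkage-disequilibrium statistic
    r^2 = D^2 / (pA(1-pA) pB(1-pB)), with D = pAB - pA*pB."""
    d = p_ab - p_a * p_b
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Illustrative frequencies: haplotype AB at 0.40, alleles A and B
# at 0.50 and 0.60, giving D = 0.10 and r^2 = 1/6.
r2 = r_squared(p_ab=0.40, p_a=0.50, p_b=0.60)
```

Because r² is computed locus pair by locus pair from present-day frequencies alone, it can produce the discontinuous disequilibrium patterns noted above; the coalescent-based E[r² rhomap] instead conditions on an estimated genealogical recombination map.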