941 results for Characteristic Initial Value Problem
Abstract:
Fatigue is an important problem to consider when a ferroelectric film is used in non-volatile memory devices. In this phenomenon, the remanent polarization and coercive field degrade as the number of switching cycles traced in hysteresis loops increases. The causes have been attributed to different mechanisms, such as the large voltage applied to the ferroelectric film in every reading process in Ferroelectric Random Access Memory (FeRAM) or digital-storage memories in computers, grain-size effects, and others. The aim of this work is to investigate the influence of crystallization kinetics on the dielectric and ferroelectric properties of Pb(Zr0.53Ti0.47)O3 thin films prepared by an alternative chemical method. Films were crystallized in air on Pt/Ti/SiO2/Si substrates at 700 °C for 1 hour by conventional thermal annealing (CTA), and at 700 °C for 1 min and for 5 min by a rapid thermal annealing (RTA) process. The final films were crack-free, with an average thickness of 750 nm. Dielectric properties were studied in the frequency range 100 Hz to 1 MHz; all films showed dielectric dispersion at low frequency. Ferroelectric properties were measured from hysteresis loops at 10 kHz. The remanent polarization (Pr) and coercive field (Ec) were 3.7 μC/cm² and 71.9 kV/cm, respectively, for the film crystallized by CTA, while for the films crystallized by RTA these parameters were essentially the same. In the fatigue process, the Pr value decreased to 14% of its initial value after 1.3 × 10⁹ switching cycles for the CTA film, while for the film crystallized by RTA for 5 min, Pr decreased to 47% of its initial value after 1.7 × 10⁹ switching cycles.
Abstract:
In this work we are concerned with the existence and uniqueness of T-periodic weak solutions for an initial-boundary value problem associated with nonlinear telegraph equations in a domain. Our arguments rely on elliptic regularization techniques, tools from classical functional analysis, and basic results from the theory of monotone operators.
Abstract:
Background: A large number of probabilistic models used in sequence analysis assign non-zero probability values to most input sequences. The most common way to decide whether a given probability is sufficient is Bayesian binary classification, in which the probability of the model characterizing the sequence family of interest is compared to that of an alternative probability model. A null model can be used as the alternative; this is the scoring technique used by sequence analysis tools such as HMMER, SAM and INFERNAL. The most prevalent null models are position-independent residue distributions, including the uniform distribution, the genomic distribution, the family-specific distribution and the target sequence distribution. This paper presents a study evaluating the impact of the choice of null model on the final result of a classification. In particular, we are interested in minimizing the number of false predictions, a crucial issue for reducing the costs of biological validation.
Results: In all tests, the target null model produced the lowest number of false positives when random sequences were used as a test set. The study was performed on DNA sequences using GC content as the measure of compositional bias, but the results should also be valid for protein sequences. To broaden the applicability of the results, the study used randomly generated sequences. Previous studies were performed on amino acid sequences, using only one probabilistic model (HMM) and a specific benchmark, and lacked more general conclusions about the performance of null models. Finally, a benchmark test with P. falciparum confirmed these results.
Conclusions: Of the evaluated models, the best suited for classification are the uniform model and the target model. However, the uniform model exhibits a GC bias that can cause more false positives for candidate sequences with extreme compositional bias, a characteristic not described in previous studies. In these cases the target model is more dependable for biological validation due to its higher specificity.
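To make the scoring scheme concrete, here is a minimal sketch (an illustration of the general technique, not code from HMMER, SAM or INFERNAL) of a log-odds score for a DNA sequence under a position-independent family model against two of the null models discussed: a uniform null and a target-composition null. The family probabilities are made-up values for illustration.

```python
import math
from collections import Counter

def log_odds(seq, family_probs, null_probs):
    """Log-odds score: log P(seq | family) - log P(seq | null),
    assuming position-independent residue distributions."""
    return sum(math.log(family_probs[r]) - math.log(null_probs[r]) for r in seq)

seq = "ATGCGCGCGGCATCG"

# Illustrative GC-rich family model (made-up probabilities).
family = {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15}

# Uniform null: every residue equally likely.
uniform_null = {b: 0.25 for b in "ACGT"}

# Target null: residue frequencies of the scored sequence itself.
counts = Counter(seq)
target_null = {b: counts[b] / len(seq) for b in "ACGT"}

print("score vs uniform null:", log_odds(seq, family, uniform_null))
print("score vs target null: ", log_odds(seq, family, target_null))
```

A GC-rich random sequence scores well against the uniform null simply because of its composition, whereas the target null absorbs that compositional bias, consistent with the paper's finding that the target model yields fewer false positives for sequences with extreme GC content.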
Abstract:
The purpose of this paper is to investigate the existence and uniqueness of positive solutions for the following fractional boundary value problem: $D_{0^+}^{\alpha}u(t) + f(t, u(t)) = 0$, $0 < t < 1$, $u(0) = u(1) = u'(0) = 0$, where $2 < \alpha \leq 3$ and $D_{0^+}^{\alpha}$ is the Riemann-Liouville fractional derivative. Our analysis relies on a fixed-point theorem in partially ordered metric spaces. The autonomous case of this problem was studied in [Zhao et al., Abs. Appl. Anal., to appear], but there the question of uniqueness of the solution is not treated. We also present some examples comparing our results with those obtained in Zhao et al. (to appear). 2010 Mathematics Subject Classification: 34B15
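As a numerical aside (my own sketch, not the paper's method), the fixed-point viewpoint can be made concrete. Standard derivations for these boundary conditions give the Green's function $G(t,s) = [t^{\alpha-1}(1-s)^{\alpha-1} - (t-s)_+^{\alpha-1}]/\Gamma(\alpha)$, so the problem becomes $u = Tu$ with $Tu(t) = \int_0^1 G(t,s)\,f(s,u(s))\,ds$, and Picard iteration approximates the fixed point when $f$ is suitably Lipschitz. The nonlinearity below is illustrative only.

```python
import numpy as np
from math import gamma

alpha = 2.5
f = lambda t, u: 1.0 + 0.5 * np.cos(u)   # illustrative positive nonlinearity

def G(t, s):
    # Green's function for D^alpha u + f = 0, u(0) = u(1) = u'(0) = 0.
    g = t ** (alpha - 1) * (1 - s) ** (alpha - 1)
    g -= np.maximum(t - s, 0.0) ** (alpha - 1)
    return g / gamma(alpha)

n = 400
s = np.linspace(0.0, 1.0, n)             # quadrature grid on [0, 1]
u = np.zeros(n)                          # initial guess u_0 = 0

for _ in range(100):                     # Picard iteration u_{k+1} = T u_k
    integrand = G(s[:, None], s[None, :]) * f(s, u)[None, :]
    u_new = np.trapz(integrand, s, axis=1)
    if np.max(np.abs(u_new - u)) < 1e-12:
        break
    u = u_new

print("approximate positive solution, max value:", u.max())
```

Since this $f$ is positive and Lipschitz with a small constant relative to the norm of the integral operator, the iteration contracts and converges to the unique positive solution that theorems of this kind guarantee.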
Abstract:
We establish the existence and uniqueness of a positive and nondecreasing solution to a singular boundary value problem for a class of nonlinear fractional differential equations. Our analysis relies on a fixed-point theorem in partially ordered sets.
Abstract:
The thesis studies the economic and financial conditions of Italian households, using microeconomic data from the Survey on Household Income and Wealth (SHIW) over the period 1998-2006. It develops along two lines of enquiry. First, it studies the determinants of households' holdings of assets and liabilities and estimates their degree of correlation; after a review of the literature, two non-linear multivariate models of the interactions between assets and liabilities are estimated on repeated cross-sections. Second, it analyses households' financial difficulties: it defines a quantitative measure of financial distress and tests, by means of non-linear dynamic probit models, whether the probability of experiencing financial difficulties is persistent over time.

Chapter 1 provides a critical review of the theoretical and empirical literature on the estimation of asset and liability holdings, on their interactions and on households' net wealth. The review stresses that a large part of the literature explains households' debt holdings as a function of, among other things, net wealth, an assumption that runs into possible endogeneity problems.

Chapter 2 defines two non-linear multivariate models of the interactions between assets and liabilities held by Italian households, estimated on pooled cross-sections of the SHIW. The first is a bivariate tobit that estimates the factors affecting assets and liabilities and their degree of correlation, with results coherent with theoretical expectations. To tackle non-normality and heteroskedasticity in the error term, which make the tobit estimators inconsistent, semi-parametric estimates are provided that confirm the results of the tobit model. The second is a quadrivariate probit on three different assets (safe, risky and real) and total liabilities; the results show the patterns of interdependence suggested by theoretical considerations.

Chapter 3 reviews methodologies for estimating non-linear dynamic panel data models, drawing attention to the problems that must be handled to obtain consistent estimators. Specific attention is given to the initial conditions problem raised by including the lagged dependent variable among the explanatory variables. The advantage of dynamic panel data models is that they account simultaneously for true state dependence, via the lagged variable, and for unobserved heterogeneity, via the specification of individual effects.

Chapter 4 applies the models reviewed in Chapter 3 to the financial difficulties of Italian households, using the information on net wealth provided in the panel component of the SHIW. The aim is to test whether households persistently experience financial difficulties over time. The alternative approaches proposed in the literature to identify households in financial distress (subjective/qualitative indicators versus quantitative indexes) are discussed thoroughly. Households in financial difficulty are identified as those whose net wealth falls below the first quartile of the net wealth distribution. Estimation is conducted with four different methods: the pooled probit model, the random-effects probit model with exogenous initial conditions, the Heckman model and the recently developed Wooldridge model.
Results from all estimators support the hypothesis of true state dependence and show that, in line with the literature, the less sophisticated estimators, namely the pooled and exogenous-initial-conditions models, over-estimate this persistence.
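To illustrate the kind of state-dependence test at issue (a hedged sketch with simulated data and hypothetical variable names, not the SHIW data or the thesis's code), the snippet below fits a pooled dynamic probit of a distress indicator on its lag. The data-generating process contains unobserved household heterogeneity but no true state dependence, so the positive lag coefficient the pooled probit finds is exactly the over-estimation of persistence described above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical panel: one row per household (hid) and wave (year).
rng = np.random.default_rng(0)
n, waves = 500, 5
df = pd.DataFrame({
    "hid": np.repeat(np.arange(n), waves),
    "year": np.tile(np.arange(waves), n),
    "income": rng.normal(size=n * waves),
})

# Distress driven by a household effect and income only: NO true state
# dependence in the data-generating process.
effect = np.repeat(rng.normal(scale=0.8, size=n), waves)
df["distress"] = (0.5 * effect - 0.4 * df["income"]
                  + rng.normal(size=n * waves) > 0.8).astype(int)

df = df.sort_values(["hid", "year"])
df["distress_lag"] = df.groupby("hid")["distress"].shift(1)
df = df.dropna(subset=["distress_lag"])

# Pooled probit: ignores the household effect, so the lag picks up
# spurious persistence (the bias the random-effects, Heckman and
# Wooldridge estimators are designed to correct).
X = sm.add_constant(df[["distress_lag", "income"]])
result = sm.Probit(df["distress"], X).fit(disp=0)
print(result.params)
```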
Abstract:
This thesis investigates two physical flow experiments on nonwoven fabrics, designed to identify unknown hydraulic parameters of the material, such as the diffusivity or conductivity function, from measurement data. The physical and mathematical modelling of these experiments leads to a Cauchy-Dirichlet problem with a free boundary for the degenerate parabolic Richards equation in the saturation formulation, the so-called direct problem. From knowledge of the free boundary of this problem, the nonlinear diffusivity coefficient of the differential equation is to be reconstructed. For this inverse problem we set up an output least-squares functional and minimize it with iterative regularization methods such as the Levenberg-Marquardt method and the IRGN method, based on a parametrization of the coefficient space by quadratic B-splines. For the direct problem we prove, among other things, existence and uniqueness of the solution of the Cauchy-Dirichlet problem as well as the existence of the free boundary. We then formally reduce the derivative of the free boundary with respect to the coefficient, which is needed for the numerical reconstruction method, to a linear degenerate parabolic boundary value problem. We explain the numerical realization and implementation of our reconstruction method and finally present reconstruction results for synthetic data.
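The overall shape of such a reconstruction can be sketched in a few lines. This is a toy stand-in: the quadratic B-spline parametrization and the output least-squares/Levenberg-Marquardt structure follow the thesis, but the forward map below is a crude square-root-in-time caricature of a free-boundary solver, not the Richards equation.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import least_squares

# Quadratic B-spline parametrization of the diffusivity on [0, 1].
degree = 2
knots = np.concatenate(([0.0] * degree, np.linspace(0.0, 1.0, 6), [1.0] * degree))
n_coef = len(knots) - degree - 1  # 7 coefficients

def diffusivity(theta, s):
    return BSpline(knots, theta, degree)(s)

# Toy forward map: the free boundary advances like sqrt(2 * D_eff * t),
# with D_eff an averaged diffusivity. A real solver would integrate the
# degenerate parabolic Richards equation instead.
t_obs = np.linspace(0.1, 1.0, 20)
def free_boundary(theta):
    d_eff = np.mean(diffusivity(theta, np.linspace(0.0, 1.0, 50)))
    return np.sqrt(2.0 * max(d_eff, 1e-12) * t_obs)

# Synthetic data, as in the thesis's reconstruction experiments.
theta_true = 0.5 + 0.4 * np.linspace(0.0, 1.0, n_coef) ** 2
data = free_boundary(theta_true) + 1e-3 * np.random.default_rng(1).normal(size=t_obs.size)

# Output least-squares functional, minimized by Levenberg-Marquardt.
fit = least_squares(lambda th: free_boundary(th) - data,
                    x0=np.full(n_coef, 0.3), method="lm")
print("recovered effective diffusivity:",
      np.mean(diffusivity(fit.x, np.linspace(0.0, 1.0, 50))))
```

In this caricature only the averaged diffusivity is identifiable from the boundary data, which is precisely why the full problem is ill-posed and calls for the regularization machinery (Levenberg-Marquardt damping, IRGN) that the thesis analyzes.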
Abstract:
This paper shows the advantages of parallel computing by numerically solving the heat conduction equation in two dimensions with the forward-in-time, centred-in-space (FTCS) explicit finite difference method. Two different levels of parallelization are considered and compared with a traditional serial procedure. The work demonstrates the importance of parallel computing when dealing with large problems, requiring a very high number of calculations, that are impractical or impossible to solve serially because of the execution time involved. The first section briefly summarizes the basic concepts of parallel computing. Next, the FTCS finite difference method for the parabolic heat equation is outlined, describing how the heat flow equation is derived in two dimensions and the particularities of the finite difference technique considered. Then a specific initial-boundary value problem is solved by the FTCS method, and pseudocodes of one serial and two parallel implementations are provided. Finally, after a discussion of the results, some conclusions are presented.
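For reference, here is a minimal serial sketch of the FTCS update for the 2D heat equation (grid size, diffusivity and boundary values are illustrative; the paper's pseudocodes would additionally split this loop across processes for the parallel versions):

```python
import numpy as np

nx, ny = 64, 64
alpha = 1.0                       # thermal diffusivity
dx = dy = 1.0 / (nx - 1)
dt = 0.2 * dx**2 / alpha          # FTCS stability needs alpha*dt/dx**2 <= 1/4

u = np.zeros((nx, ny))
u[0, :] = 100.0                   # Dirichlet condition: one hot edge

for _ in range(2000):
    # Forward in time, centred in space.
    lap = ((u[2:, 1:-1] - 2.0 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dx**2
           + (u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dy**2)
    u[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * dt * lap

print("temperature at the centre:", u[nx // 2, ny // 2])
```

The stability bound is what makes the method expensive on fine grids (the time step shrinks with dx²), which is the motivation the paper gives for parallelizing it.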
Abstract:
This paper addresses the problem of optimal constant continuous low-thrust transfer in the context of the restricted two-body problem (R2BP). Using Pontryagin's principle, the problem is formulated as a two-point boundary value problem (TPBVP) for a Hamiltonian system. Lie transforms obtained through the Deprit method allow us to obtain the canonical mapping of the phase flow as a series in terms of the order of magnitude of the applied thrust. The reachable set of states starting from a given initial condition under the optimal control policy is obtained analytically. In addition, a particular optimal transfer can be computed as the solution of a non-linear algebraic equation. The use of Lie series and Lie transforms in trajectory optimization problems for satellites propelled by low-thrust engines is thus investigated.
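To show the TPBVP structure that Pontryagin's principle produces, here is a toy double-integrator example solved by shooting; it illustrates the general structure only, not the R2BP problem or the Lie-series mapping of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

# Minimum-energy double integrator: min (1/2) * integral of u^2,
# x1' = x2, x2' = u, x(0) = (0, 0), x(T) = (1, 0).
T = 1.0
x_target = np.array([1.0, 0.0])

def hamiltonian_system(t, y):
    x1, x2, p1, p2 = y
    u = -p2                       # stationarity: dH/du = u + p2 = 0
    return [x2, u, 0.0, -p1]      # state and costate equations

def shoot(p0):
    sol = solve_ivp(hamiltonian_system, (0.0, T), [0.0, 0.0, *p0],
                    rtol=1e-10, atol=1e-10)
    return sol.y[:2, -1] - x_target   # terminal state mismatch

p0 = fsolve(shoot, x0=[1.0, -1.0])
print("initial costates:", p0)        # analytic answer: (-12, -6)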
Abstract:
One key issue in the simulation of bare electrodynamic tethers (EDTs) is the accurate and fast computation of the collected current, an ambient-dependent operation necessary to determine the Lorentz force at each time step. This paper introduces a novel semianalytical solution that allows researchers to compute the current distribution along the tether efficiently and effectively under orbital-motion-limited (OML) and beyond-OML conditions, i.e., if the tether radius is greater than a certain ambient-dependent threshold. The method reduces the original boundary value problem to a couple of nonlinear equations. If certain dimensionless variables are used, the beyond-OML effect simply makes the tether characteristic length L* larger and is decoupled from the current-determination problem. A validation of the results and a comparison of performance in terms of the time consumed are provided with respect to a previous ad hoc solution and a conventional shooting method.
Abstract:
This thesis is based on the study of the two-body, two-point boundary value problem, initially developed by Lambert, from whom it takes its name. Historically, Lambert's Problem has been used for orbit determination from astronomical observations of celestial bodies. Today it is used continually in orbit determination, in planetary and interplanetary missions, in space rendezvous and interception, and even in orbit corrections. Given its great importance, this thesis investigates its solution and its applications in current space missions.
The open research field is very wide, so it is necessary to set specific, realistic objectives within the scope of a Thesis, yet ones that show clearly enough the potential of the results provided in this work and even allow them to be extended to other areas of application. As a result of this analysis, the main aim of the Thesis is the development of algorithms to solve Lambert's Problem that can be applied very efficiently in the real missions where it appears. Throughout these developments, special consideration has been given to the efficiency of the required computation compared with currently existing methods, highlighting how to avoid the loss of precision inherent in this type of algorithm and the possibility of applying any iterative method involving derivatives of any order. In pursuit of these objectives, several solutions to Lambert's Problem are developed, all based on solving transcendental equations, leading to the following main contributions of this work:
• A completely different, generic way of obtaining the various equations for solving Lambert's Problem by analytical development, from scratch, starting from the known elementary equations of the conics (geometric and temporal), providing in every case formulas for computing derivatives of any order.
• A unified view of the most relevant existing equations, showing their equivalence with variants of the equations developed here.
• The deduction of a new variant of the equation, the main achievement of this Thesis, which outperforms all the others in efficiency (both computational cost and accuracy).
• A study of the sensitivity of the solution to variations in the initial data, and of how to apply the results to real trajectory-optimization cases.
• In addition, from the results it is possible to deduce many properties used in the literature to simplify the problem, in particular the invariance property, which leads to the Simplified Transformed Problem.
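As a small illustration of the transcendental-equation approach (a classical textbook formulation via Lagrange's time-of-flight equation, not the new variant derived in the thesis), the sketch below recovers the semimajor axis of the elliptic, short-way, single-revolution transfer; it is valid only when the requested flight time lies between the parabolic and minimum-energy values for this geometry, and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def tof(a, s, c):
    """Lagrange's time-of-flight equation for the elliptic, short-way
    branch with t below the minimum-energy time; the other branches
    use the supplementary angles."""
    alpha = 2.0 * np.arcsin(np.sqrt(s / (2.0 * a)))
    beta = 2.0 * np.arcsin(np.sqrt((s - c) / (2.0 * a)))
    return np.sqrt(a**3 / MU) * ((alpha - np.sin(alpha)) - (beta - np.sin(beta)))

def lambert_sma(r1, r2, dtheta, dt):
    """Semimajor axis for radii r1, r2 [km], transfer angle
    dtheta < pi [rad] and flight time dt [s]."""
    c = np.sqrt(r1**2 + r2**2 - 2.0 * r1 * r2 * np.cos(dtheta))  # chord
    s = 0.5 * (r1 + r2 + c)                                      # semiperimeter
    a_min = s / 2.0                          # minimum-energy ellipse
    return brentq(lambda a: tof(a, s, c) - dt, a_min, 1e4 * a_min)

print("semimajor axis [km]:",
      lambert_sma(7000.0, 9000.0, np.pi / 3.0, 1500.0))
```

Root-finding with brentq avoids derivatives entirely; the thesis's equations, which come with derivatives of any order, instead enable high-order iterative methods, which is where the efficiency gains it reports come from. The sketch also omits the multi-revolution, hyperbolic and long-way branches that a production solver must handle.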
Abstract:
This work formulates existence theorems for solutions to two-point boundary value problems on time scales. The methods used include maximum principles, a priori bounds and topological degree theory.