966 results for Two-point boundary value problems
Abstract:
We aim at understanding the multislip behaviour of metals subject to irreversible deformations at small scales. By focusing on the simple shear of a constrained single-crystal strip, we show that discrete Dislocation Dynamics (DD) simulations predict a strong latent hardening size effect, with smaller being stronger in the range [1.5 µm, 6 µm] for the strip height. We attempt to represent the DD pseudo-experimental results by developing a flow theory of Strain Gradient Crystal Plasticity (SGCP), involving both energetic and dissipative higher-order terms and, as a main novelty, a strain gradient extension of the conventional latent hardening. In order to assess the capability of the proposed SGCP theory, we implement it into a Finite Element (FE) code and set its material parameters on the basis of the DD results. The SGCP FE code is specifically developed for the boundary value problem under study, so that we can implement a fully implicit (Backward Euler) consistent algorithm. Special emphasis is placed on the discussion of the role of the material length scales involved in the SGCP model, from both the mechanical and numerical points of view.
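The fully implicit (Backward Euler) update mentioned above can be pictured with a minimal sketch of a single implicit step solved by Newton iteration for a generic rate equation dy/dt = f(t, y); the rate function and the numbers below are placeholder assumptions and not the authors' SGCP return-mapping algorithm.

# Minimal sketch of a fully implicit (Backward Euler) step with Newton iteration.
# Solves y_{n+1} = y_n + dt * f(t_{n+1}, y_{n+1}) for a scalar rate equation.
# The rate function below is a placeholder, not the SGCP constitutive update.

def backward_euler_step(f, dfdy, t_n, y_n, dt, tol=1e-10, max_iter=50):
    t_next = t_n + dt
    y = y_n  # initial guess: previous value
    for _ in range(max_iter):
        residual = y - y_n - dt * f(t_next, y)
        if abs(residual) < tol:
            break
        jacobian = 1.0 - dt * dfdy(t_next, y)  # consistent (exact) tangent
        y -= residual / jacobian
    return y

# Example: relaxation equation dy/dt = -k * y with k = 10 (illustrative values).
k = 10.0
f = lambda t, y: -k * y
dfdy = lambda t, y: -k
y, t, dt = 1.0, 0.0, 0.01
for step in range(100):
    y = backward_euler_step(f, dfdy, t, y, dt)
    t += dt
print(y)  # first-order accurate approximation of exp(-10) ≈ 4.5e-5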
Abstract:
The article addresses the problem of registering several images of the same scene, captured with a 3D scanner, in order to generate a single three-dimensional model; genetic algorithms were used for this purpose. ABSTRACT: This work introduces a solution based on genetic algorithms to find the overlapping area between two point cloud captures obtained from a three-dimensional scanner. Considering three translation coordinates and three rotation angles, the genetic algorithm evaluates the matching points in the overlapping area between the two captures given that transformation. Genetic simulated annealing is used to improve the accuracy of the results obtained by the genetic algorithm.
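A minimal sketch of the fitness idea described above: a candidate rigid transform (three rotation angles and three translations) is scored by how many points of one capture land close to the other capture after the transform; a genetic algorithm would evolve the six parameters to maximize this score, and simulated annealing would refine the best individual. The distance threshold and the rotation convention are assumptions.

import numpy as np
from scipy.spatial import cKDTree

def rotation_matrix(rx, ry, rz):
    # Elemental rotations about x, y, z composed as Rz @ Ry @ Rx (assumed convention).
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def fitness(params, cloud_a, cloud_b, threshold=0.01):
    # params = (rx, ry, rz, tx, ty, tz); cloud_a and cloud_b are (N, 3) point arrays.
    rx, ry, rz, tx, ty, tz = params
    transformed = cloud_a @ rotation_matrix(rx, ry, rz).T + np.array([tx, ty, tz])
    distances, _ = cKDTree(cloud_b).query(transformed)
    return int(np.count_nonzero(distances < threshold))  # matching points in the overlap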
Abstract:
This letter presents a temperature-sensing technique based on the temperature dependency of MOSFET leakage currents. To mitigate the effects of process variation, the ratio of two different leakage current measurements is calculated. Simulations show that this ratio is robust to process spread. The resulting sensor is quite small (0.0016 mm², including analog-to-digital conversion) and very energy efficient, consuming less than 640 pJ/conversion. After a two-point calibration, the accuracy over the 40 °C to 110 °C range is better than 1.5 °C, which makes the technique suitable for thermal management applications.
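The two-point calibration mentioned above can be pictured with a small sketch: readings taken at two known temperatures fix a straight-line mapping that is then inverted for later measurements. The linear model and the numbers are illustrative assumptions, not the sensor's actual transfer function.

# Illustrative two-point calibration: fit a straight line through the readings
# taken at two known temperatures, then invert it for later measurements.
# The linear model and the values are assumptions for illustration only.

def two_point_calibration(t1, reading1, t2, reading2):
    slope = (reading2 - reading1) / (t2 - t1)
    offset = reading1 - slope * t1
    return lambda reading: (reading - offset) / slope

# Hypothetical calibration readings at 40 °C and 110 °C.
to_temperature = two_point_calibration(40.0, 0.82, 110.0, 2.45)
print(round(to_temperature(1.60), 1))  # ≈ 73.5 °C for a mid-range reading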
Abstract:
The efficient set plays a fundamental role in the solution processes of Multicriteria Decision Theory, since it is within this set that the decision maker must make his or her most preferred choice. However, generating that set may be difficult, especially in continuous and/or nonlinear problems. The first chapter of this thesis introduces Multicriteria Decision Making and presents the concepts and tools used in later developments. The second chapter studies decision-making problems under certainty. The basic tool and starting point is the vector value function, which reflects imprecision about the decision maker's preferences. A characterization of the value efficient set is proposed, together with different approximations and their nesting and convergence properties. Several interactive solution algorithms complement the theoretical developments. The third chapter is devoted to the case of uncertainty. Its development is partially parallel to the previous one and uses the vector utility function to model the decision maker's preferences. Starting from simple distributions, utility efficiency is introduced, characterized, and approximated, and these results are then extended to discrete and continuous distributions. The fourth chapter studies the problem under fuzziness, although at an introductory level. We conclude by suggesting several open problems.---ABSTRACT---The efficient set of a Multicriteria Decision-Making problem plays a fundamental role in the solution process since the Decision Maker's preferred choice should be in this set. However, the computation of that set may be difficult, especially in continuous and/or nonlinear problems. Chapter one introduces Multicriteria Decision-Making. We review basic concepts and tools for later developments. Chapter two studies decision-making problems under certainty. The basic tool is the vector value function, which represents imprecision in the DM's preferences. We propose a characterization of the value efficient set and different approximations with nesting and convergence properties. Several interactive algorithms complement the theoretical results. We devote Chapter three to problems under uncertainty. The development is parallel to the former and uses vector utility functions to model the DM's preferences. We introduce utility efficiency for simple distributions, its characterization and some approximations, which we partially extend to discrete and continuous classes of distributions. Chapter four studies the problem under fuzziness, at an exploratory level. We conclude with several open problems.
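As a minimal numerical illustration of the efficient (nondominated) set discussed above, the sketch below applies the basic dominance check to a finite set of alternatives scored on several criteria (larger values preferred); it does not reproduce the approximation schemes or interactive algorithms of the thesis.

# Minimal illustration of the efficient (nondominated) set for a finite set of
# alternatives evaluated on several criteria (here: larger values preferred).
import numpy as np

def efficient_set(values):
    # values: (n_alternatives, n_criteria) array; returns indices of nondominated rows.
    efficient = []
    for i, v in enumerate(values):
        dominated = any(np.all(w >= v) and np.any(w > v)
                        for j, w in enumerate(values) if j != i)
        if not dominated:
            efficient.append(i)
    return efficient

alternatives = np.array([[3.0, 1.0], [2.0, 2.0], [1.0, 3.0], [1.0, 1.0]])
print(efficient_set(alternatives))  # [0, 1, 2]; alternative 3 is dominated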
Abstract:
One key issue in the simulation of bare electrodynamic tethers (EDTs) is the accurate and fast computation of the collected current, an ambient-dependent operation necessary to determine the Lorentz force at each time step. This paper introduces a novel semianalytical solution that allows researchers to compute the current distribution along the tether efficiently and effectively under orbital-motion-limited (OML) and beyond-OML conditions, i.e., if the tether radius is greater than a certain ambient-dependent threshold. The method reduces the original boundary value problem to a couple of nonlinear equations. If certain dimensionless variables are used, the beyond-OML effect simply makes the tether characteristic length L* larger and is decoupled from the current determination problem. A validation of the results and a comparison of performance in terms of the time consumed are provided, with respect to a previous ad hoc solution and a conventional shooting method.
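Since the paper benchmarks its semianalytical solution against a conventional shooting method, the following is a generic sketch of that technique on a simple two-point boundary value problem; it is not the tether current problem, and the equation and tolerances below are illustrative assumptions.

# Generic sketch of a conventional shooting method applied to a simple
# two-point boundary value problem (not the tether equations):
#     y'' = -y,  y(0) = 0,  y(pi/2) = 1   (exact solution: y = sin(x))
# The unknown initial slope is found with a scalar root finder.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def boundary_mismatch(slope):
    sol = solve_ivp(lambda x, y: [y[1], -y[0]], (0.0, np.pi / 2), [0.0, slope],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0  # residual of the far-end boundary condition

slope = brentq(boundary_mismatch, 0.0, 2.0)
print(slope)  # ≈ 1.0, i.e. y'(0) = cos(0)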
Abstract:
When used appropriately, self- and peer-assessment are very effective learning tools. In the present work, instructor formative assessment and feedback, self-assessment (SA), and peer-assessment (PA) have been compared. During the first part of a semester, the students followed a continuous formative assessment. Subsequently, they were divided into two subgroups based on similar performances. One subgroup performed SA, and the other followed PA during the last part of the course. The performances of the two groups in solving problems were compared. Results suggest that PA is a more effective learning tool than SA, and both are more effective than instructor formative assessment. However, a survey conducted at the end of the experiment showed higher student confidence in instructor assessment than in PA. The students recognized the usefulness of acting as peer assessors, but believed that SA helped them more than PA.
Abstract:
We propose in this work a very simple torsion-free beam element capable of capturing geometrical nonlinearities. The simple formulation is objective and unconditionally convergent for geometrically nonlinear models with large displacements, in the traditional sense that finer discretizations yield more precise numerical solutions. The formulation does not employ rotational degrees of freedom, can be applied to two- and three-dimensional problems, and is computationally very efficient.
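The convergence claim above is commonly quantified by estimating the observed rate from the errors of two successive mesh refinements; the sketch below shows that generic check with hypothetical error values, and is not part of the beam element formulation itself.

# Generic convergence-rate check: estimate the observed order of accuracy from
# the errors of two successive mesh refinements. The error values are hypothetical.
import math

def observed_order(error_coarse, error_fine, h_coarse, h_fine):
    return math.log(error_coarse / error_fine) / math.log(h_coarse / h_fine)

# Hypothetical tip-displacement errors from meshes with element sizes h and h/2.
print(round(observed_order(4.0e-3, 1.0e-3, 0.1, 0.05), 2))  # ≈ 2.0, i.e. second order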
Abstract:
Echolocating big brown bats (Eptesicus fuscus) broadcast ultrasonic frequency-modulated (FM) biosonar sounds (20–100 kHz frequencies; 10–50 μs periods) and perceive target range from echo delay. Knowing the acuity for delay resolution is essential to understand how bats process echoes because they perceive target shape and texture from the delay separation of multiple reflections. Bats can separately perceive the delays of two concurrent electronically generated echoes arriving as little as 2 μs apart, thus resolving reflecting points as close together as 0.3 mm in range (two-point threshold). This two-point resolution is roughly five times smaller than the shortest periods in the bat's sounds. Because the bat's broadcasts are 2,000–4,500 μs long, the echoes themselves overlap and interfere with each other, merging into a single sound whose spectrum is shaped by their mutual interference, depending on the size of the time separation. To separately perceive the delays of overlapping echoes, the bat has to recover information about their very small delay separation that was transferred into the spectrum when the two echoes interfered with each other, thus explicitly reconstructing the range profile of targets from the echo spectrum. However, the bat's 2-μs resolution limit is so short that the available spectral cues are extremely limited. Resolution of delay seems overly sharp just for interception of flying insects, which suggests that the bat's biosonar images are of higher quality to suit a wider variety of orientation tasks, and that biosonar echo processing is correspondingly more sophisticated than has been suspected.
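The quoted 0.3 mm figure follows from the two-way travel time of sound in air (assuming c ≈ 343 m/s):

\Delta r = \frac{c\,\Delta t}{2} \approx \frac{343\ \mathrm{m\,s^{-1}} \times 2\ \mu\mathrm{s}}{2} \approx 0.34\ \mathrm{mm}.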
Abstract:
Ngrol genes (NgrolB, NgrolC, NgORF13, and NgORF14) that are similar in sequence to genes in the left transferred DNA (TL-DNA) of Agrobacterium rhizogenes have been found in the genome of untransformed plants of Nicotiana glauca. It has been suggested that a bacterial infection resulted in transformation of Ngrol genes early in the evolution of the genus Nicotiana. Although the corresponding four rol genes in TL-DNA provoke hairy-root syndrome in plants, present-day N. glauca and plants transformed with Ngrol genes do not exhibit this phenotype. Sequence and complementation analyses revealed that the NgrolB gene did not induce adventitious roots because it contained two point mutations. Single-base site-directed mutagenesis at these two positions restored the capacity for root induction to the NgrolB gene. When the NgrolB gene, with these two base substitutions, was placed under the control of the cauliflower mosaic virus 35S promoter (P35S), transgenic tobacco plants exhibited morphological abnormalities that were not observed in P35S-RirolB plants. In contrast, the activity of the NgrolC gene may have been conserved after the ancient infection by bacteria. The effects of the horizontal transfer of the Ngrol genes and of the mutations in the NgrolB gene on the phenotype of ancient plants during the evolution of N. glauca are discussed.
Abstract:
Systemic lupus erythematosus (SLE) is an autoimmune disorder characterized by production of autoantibodies against intracellular antigens including DNA, ribosomal P, Ro (SS-A), La (SS-B), and the spliceosome. Etiology is suspected to involve genetic and environmental factors. Evidence of genetic involvement includes: associations with HLA-DR3, HLA-DR2, Fcγ receptors (FcγR) IIA and IIIA, and hereditary complement component deficiencies, as well as familial aggregation, monozygotic twin concordance >20%, λs > 10, purported linkage at 1q41–42, and inbred mouse strains that consistently develop lupus. We have completed a genome scan in 94 extended multiplex pedigrees by using model-based linkage analysis. Potential [log10 of the odds for linkage (lod) > 2.0] SLE loci have been identified at chromosomes 1q41, 1q23, and 11q14–23 in African-Americans; 14q11, 4p15, 11q25, 2q32, 19q13, 6q26–27, and 12p12–11 in European-Americans; and 1q23, 13q32, 20q13, and 1q31 in all pedigrees combined. An effect for the FcγRIIA candidate polymorphism at 1q23 (lod = 3.37 in African-Americans) is syntenic with linkage in a murine model of lupus. Sib-pair and multipoint nonparametric analyses also support linkage (P < 0.05) at nine loci detected by using two-point lod score analysis (lod > 2.0). Our results are consistent with the presumed complexity of genetic susceptibility to SLE and illustrate that racial origin is likely to influence the specific nature of these genetic effects.
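For reference, the two-point lod score used above compares the likelihood of the data at a recombination fraction θ with that under free recombination (θ = 1/2),

Z(\theta) = \log_{10}\frac{L(\theta)}{L(\theta = 1/2)},

with lod > 3.0 conventionally regarded as significant evidence of linkage; the abstract uses lod > 2.0 as a screening threshold for potential loci.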
Abstract:
The number of traffic accidents in Brazil has been growing over the last decades. One of the main causes of accidents on Brazilian highways is speeding, which contributes to the likelihood of accidents. The speeds adopted by drivers are also a function of the geometric elements that make up the road (radius, grade, lane width, etc.). A consistent alignment does not violate drivers' expectations and ensures safe operation. Most drivers can perceive coordination flaws but are technically unaware of their origin. The objective of this research is the design consistency analysis of a stretch of a multilane highway in the country with a high accident rate and a heavy flow of commercial vehicles. The locations with the highest occurrence of accidents were identified, and speed measurements were carried out to build an operating speed (V85) prediction model for the study stretch. With this model, a consistency analysis was performed using the safety criteria method, which identified 2 sections with consistency problems. Finally, it was checked whether these sections corresponded to the locations with the highest number of accidents: tangent T5 precedes a curve with a high accident rate (km 511+000), and the location with the highest concentration of accidents (km 514) was rated FAIR.
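The safety criteria method cited above is usually implemented by classifying the difference in operating speed between successive alignment elements; the sketch below uses the commonly cited 10 km/h and 20 km/h limits (Lamm's criteria) as assumptions, since the dissertation's exact thresholds are not given here.

# Hedged sketch of a design-consistency check based on operating-speed differences
# between successive elements. The 10 km/h and 20 km/h limits are the commonly
# cited Lamm thresholds and are an assumption here, not taken from the dissertation.

def consistency_rating(v85_element, v85_next):
    diff = abs(v85_element - v85_next)
    if diff <= 10.0:
        return "GOOD"
    elif diff <= 20.0:
        return "FAIR"
    return "POOR"

print(consistency_rating(96.0, 82.0))  # FAIR: 14 km/h drop between successive elements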
Abstract:
In general, a planing hull is designed to reach high speeds. This performance attribute is directly related to the size of the vessel and to the power installed in its propulsion plant. Traditionally, during the design of a vessel, performance analyses are carried out using results from existing vessels, taken from systematic series or from vessels already developed by the shipyard and/or designer. In addition, performance attributes can be determined through empirical and/or statistical methods, in which the vessel is represented by its main geometric parameters, or from tests on reduced-scale models or prototypes. In the specific case of planing hulls, the cost of reduced-scale testing is very high in relation to the design cost, so most designers do not opt for experimental tests of new vessels under development. Over the last years, the Savitsky method has been widely used to estimate the installed power of a planing hull. This method uses a set of semi-empirical equations to determine the forces acting on the hull, from which it is possible to determine the equilibrium operating position and the propulsive force required to sail at a given speed. The Savitsky method is widely used in the early design stages, when the hull geometry has not yet been fully defined, because it uses only the main geometric characteristics of the vessel to estimate the forces. As the design progresses, more detailed performance estimates are required. For the structural design, for example, an estimate of the pressure field acting on the bottom of the hull is needed, which cannot be determined by the Savitsky method. The computational method implemented in this dissertation aims to determine the flow characteristics and the pressure field acting on the hull of a planing vessel sailing in calm water. The flow is obtained from a boundary value problem in which the wetted surface of the hull is treated as a slender body. Owing to the use of slender-body theory, the problem can be treated separately at each section, where the boundary conditions are enforced through a vortex distribution.
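The Savitsky-type equilibrium search mentioned above amounts to finding the running attitude at which hydrodynamic lift balances the vessel weight at a given speed; the sketch below shows only the structure of such a search with a placeholder lift function, and is neither the semi-empirical Savitsky equations nor the slender-body solver of the dissertation.

# Structural sketch of a Savitsky-type equilibrium search: find the trim angle at
# which the hydrodynamic lift balances the vessel weight at a given speed.
# hydrodynamic_lift() is a placeholder; the actual semi-empirical equations
# (or the slender-body pressure integration) would go in its place.
from scipy.optimize import brentq

def hydrodynamic_lift(trim_deg, speed):
    # Placeholder monotonic model for illustration only (returns newtons).
    return 1.2e3 * trim_deg * speed**2 / 100.0

def equilibrium_trim(weight_n, speed):
    residual = lambda trim: hydrodynamic_lift(trim, speed) - weight_n
    return brentq(residual, 0.1, 15.0)  # trim bracket in degrees

print(round(equilibrium_trim(30.0e3, 15.0), 2))  # equilibrium trim (deg) for a hypothetical 30 kN boat at 15 m/s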
Abstract:
Background. To evaluate, in the general population, the sources of information, attitudes, and willingness regarding vaccination against the 2009 pandemic influenza A/H1N1. Methods. Descriptive cross-sectional study carried out between 25 November and 30 December 2009 through face-to-face personal interviews with a random sample (826) of adults residing in the Elche Health Department (Spain). Results. Respondents reported that television (57%) and the family physician (47.9%) were their main sources of information on vaccines. 82.2% had a good opinion of vaccines, and 30.5% perceived influenza A/H1N1 as more severe than seasonal influenza, this perception increasing among older respondents and those with less education. 25.4% of respondents were worried about contracting it, especially those with a lower educational level. 42.1% expressed willingness to be vaccinated against seasonal influenza, falling to 18.4% for influenza A/H1N1. Willingness to be vaccinated increases with age and, in the case of influenza A/H1N1, decreases with higher educational level. The family physician is the most decisive source of information for immunization against seasonal influenza (OR 1.43) and influenza A/H1N1 (OR 2.47). Conclusions. There is low acceptance of the pandemic vaccine and low perceived severity of influenza A/H1N1. Previous experience of seasonal influenza vaccination predisposes toward immunization against influenza A/H1N1. Although the mass media were the most common source of information during this episode, the influence of the family physician on the decision to get vaccinated is significant.
Abstract:
The new methods accurately integrate forced and damped oscillators. A family of analytical functions known as T-functions, dependent on three parameters, is introduced. The solution is expressed as a series of T-functions, whose coefficients are calculated by means of recurrences involving the perturbation function. In the T-functions series method, the perturbation parameter is the factor in the local truncation error. Furthermore, the method is zero-stable and convergent. An application of the method to a physical IVP modeled by forced and damped oscillators is presented. The good behaviour and precision of the method are evidenced by contrasting the results with other reputed algorithms implemented in MAPLE.
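The T-functions recurrences themselves are not reproduced here; the sketch below only sets up the kind of forced, damped oscillator IVP the method targets and integrates it with a standard library solver, the sort of reference against which such methods are contrasted. All parameter values are illustrative assumptions.

# Forced, damped oscillator IVP of the kind targeted above,
#     x'' + 2*zeta*omega*x' + omega**2*x = F*cos(Omega*t),
# integrated with a standard solver as a reference; parameters are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

zeta, omega, F, Omega = 0.05, 2.0, 1.0, 1.8

def rhs(t, y):
    x, v = y
    return [v, F * np.cos(Omega * t) - 2.0 * zeta * omega * v - omega**2 * x]

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[0, -1])  # displacement at t = 50 s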