980 results for Uniformly Convex
Abstract:
On several classes of n-person NTU games that have at least one Shapley NTU value, Aumann characterized this solution by six axioms: Non-emptiness, efficiency, unanimity, scale covariance, conditional additivity, and independence of irrelevant alternatives (IIA). Each of the first five axioms is logically independent of the remaining axioms, and the logical independence of IIA is an open problem. We show that for n = 2 the first five axioms already characterize the Shapley NTU value, provided that the class of games is not further restricted. Moreover, we present an example of a solution that satisfies the first five axioms and violates IIA for two-person NTU games (N, V) with uniformly p-smooth V(N).
Abstract:
142 p.
Abstract:
ENGLISH: The growth of yellowfin tuna in the eastern Pacific is described in terms of several measurements taken from the fish and their otoliths (sagittae). Equations are also developed to predict age from the readily available dimensions of fork length and head length. The data for all of these relationships were obtained from a sample of 196 fish collected from 1977 through 1979 from purse seiners fishing north of the equator and east of 137°W. The fork-length range of the sample was 30-170 cm. The number of increments on a sagitta of each fish was used as a direct estimate of its age in days. The correspondence between increments and days has been validated for yellowfin in the length range of 40-110 cm. Circumstantial evidence indicates that the relationship also applies in the intervals of 0-40 cm and 110-170 cm. This circumstantial evidence was derived from: 1) literature on validated increments during early growth for other species, 2) knowledge that structures assumed to be daily increments on yellowfin otoliths have subsequently been validated in the corresponding zone on bluefin otoliths, and 3) a comparison of the growth curve based on increments to others obtained from length-frequency modal analysis. Based on this information, the age estimates over the entire size range of sampled fish are believed to be accurate. In addition to the general growth and age-predictive relationships, the major conclusions of the study are that: 1) Sexually dimorphic growth exists in terms of fork length, fish weight and the length of the otolith counting path for the entire data set. Examination of the data for 1977 and 1979 also revealed that the fork-length growth of each sex differed within years. 2) For combined sexes there were significant differences among the fork-length growth curves for yellowfin sampled in different years.
3) Yellowfin caught inshore (within 275 miles of the coast) were heavier than those caught offshore for fork lengths between 30 and 110 cm. The situation was reversed for lengths greater than 110 cm. 4) Back-calculated spawning months were distributed uniformly throughout the year in 1974 and 1977, but in 1975-1976 and 1978 spawning activity was apparently concentrated in the latter half of the year. (PDF contains 62 pages.)
Abstract:
A process of laser cladding Ni-CF-C-CaF2 mixed powders was carried out to form a multifunctional composite coating on a gamma-TiAl substrate. The microstructure of the coating was examined using XRD, SEM and EDS. The coating has a unique microstructure consisting of primary dendritic or short-stick TiC and blocky Al4C3 carbide reinforcement, as well as fine isolated spherical CaF2 solid-lubrication particles uniformly dispersed in the NiCrAlTi (gamma) matrix. The average microhardness of the composite coating is approximately HV 650, about a factor of two greater than that of the TiAl substrate. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Six topics in incompressible, inviscid fluid flow involving vortex motion are presented. The stability of the unsteady flow field due to the vortex filament expanding under the influence of an axial compression is examined in the first chapter as a possible model of the vortex bursting observed in aircraft contrails. The filament with a stagnant core is found to be unstable to axisymmetric disturbances. For initial disturbances with the form of axisymmetric Kelvin waves, the filament with a uniformly rotating core is neutrally stable, but the compression causes the disturbance to undergo a rapid increase in amplitude. The time at which the increase occurs is, however, later than the observed bursting times, indicating the bursting phenomenon is not caused by this type of instability.
In the second and third chapters the stability of a steady vortex filament deformed by two-dimensional strain and shear flows, respectively, is examined. The steady deformations are in the plane of the vortex cross-section. Disturbances which deform the filament centerline into a wave which does not propagate along the filament are shown to be unstable and a method is described to calculate the wave number and corresponding growth rate of the amplified waves for a general distribution of vorticity in the vortex core.
In Chapter Four exact solutions are constructed for two-dimensional potential flow over a wing with a free ideal vortex standing over the wing. The loci of positions of the free vortex are found and the lift is calculated. It is found that the lift on the wing can be significantly increased by the free vortex.
The two-dimensional trajectories of an ideal vortex pair near an orifice are calculated in Chapter Five. Three geometries are examined, and the criteria for the vortices to travel away from the orifice are determined.
Finally, Chapter Six reproduces in full the paper, "Structure of a linear array of hollow vortices of finite cross-section," co-authored with G. R. Baker and P. G. Saffman. Free streamline theory is employed to construct an exact steady solution for a linear array of hollow, or stagnant-cored, vortices. If each vortex has area A and the separation is L, then there are two possible shapes if A^(1/2)/L is less than 0.38 and none if it is larger. The stability of the shapes to two-dimensional, periodic and symmetric disturbances is considered for hollow vortices. The more deformed of the two possible shapes is found to be unstable, while the less deformed shape is stable.
Abstract:
Some problems of edge waves and standing waves on beaches are examined.
The nonlinear interaction of a wave normally incident on a sloping beach with a subharmonic edge wave is studied. A two-timing expansion is used in the full nonlinear theory to obtain the modulation equations which describe the evolution of the waves. It is shown how large amplitude edge waves are produced; and the results of the theory are compared with some recent laboratory experiments.
Traveling edge waves are considered in two situations. First, the full linear theory is examined to find the finite-depth effect on the edge waves produced by a moving pressure disturbance. In the second situation, a Stokes' expansion is used to discuss the nonlinear effects in shallow-water edge waves traveling over a bottom of arbitrary shape. The results are compared with those of the full theory for a uniformly sloping bottom.
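For context, the classical linear dispersion relation for edge waves on a uniformly sloping beach (Ursell's result, quoted here from the standard water-wave literature rather than from the thesis itself) is:

```latex
\omega^2 = g k \sin\bigl((2n+1)\beta\bigr), \qquad n = 0, 1, 2, \ldots, \quad (2n+1)\beta \le \tfrac{\pi}{2},
```

where β is the beach slope angle, k the longshore wavenumber, and n the mode number; nonlinear shallow-water analyses of the kind described above yield amplitude-dependent corrections to this relation.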
The finite amplitude effects for waves incident on a sloping beach, with perfect reflection, are considered. A Stokes' expansion is used in the full nonlinear theory to find the corrections to the dispersion relation for the cases of normal and oblique incidence.
Finally, an abstract formulation of the linear water-waves problem is given in terms of a self-adjoint but nonlocal operator. The appropriate spectral representations are developed for two particular cases.
Abstract:
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first one is “flow optimization over a flow network” and the second one is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about the brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
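The circuit intuition above — that the inverse covariance of node voltages mirrors the wiring — can be sketched numerically. The following toy example (not the dissertation's code; the node count, conductances, and sample size are arbitrary illustration choices) builds the precision matrix of a four-node resistor chain and checks that its empirical estimate recovers the chain topology:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-node resistive circuit: a chain 0-1-2-3 with unit
# conductances, plus a small leak conductance to ground so the
# conductance matrix is invertible.
n = 4
theta = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    theta[i, j] = theta[j, i] = -1.0
    theta[i, i] += 1.0
    theta[j, j] += 1.0
theta += 0.5 * np.eye(n)              # leak to ground

# Under white-noise current injections, node voltages are Gaussian with
# precision matrix theta; zeros in theta mark non-adjacent node pairs.
cov = np.linalg.inv(theta)
samples = rng.multivariate_normal(np.zeros(n), cov, size=50_000)
emp_prec = np.linalg.inv(np.cov(samples.T))

# Adjacent pairs (0,1) carry the chain conductance (-1); non-adjacent
# pairs (0,3) should estimate to approximately zero.
```

With a well-conditioned covariance, the empirical precision matrix separates the true edges from the absent ones cleanly; the ill-conditioned case discussed in the abstract is exactly where this naive inversion degrades.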
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of the Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
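As a minimal illustration of the kind of fluid model being refined here, a textbook Kelly-style primal rate update on a single link can be integrated in a few lines (constants are arbitrary, and the buffering effects that the chapter actually studies are deliberately omitted):

```python
# Single user on a single link: the source rate x(t) follows the primal
# rule dx/dt = k * (w - x * p(x)) with a linear link price p(x) = x / C.
# Forward-Euler integration; the fixed point satisfies w = x^2 / C.
k, w, C, dt = 0.5, 4.0, 1.0, 0.01
x = 0.1
for _ in range(2000):
    x += dt * k * (w - x * (x / C))
# x converges to sqrt(w * C) = 2.0
```

The abstract's point is that real links see queued rates rather than the original source rate x, which is precisely what this classical model ignores.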
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.
Abstract:
With data centers being the supporting infrastructure for a wide range of IT services, their efficiency has become a big concern to operators, as well as to society, for both economic and environmental reasons. The goal of this thesis is to design energy-efficient algorithms that reduce energy cost while minimizing compromise to service. We focus on the algorithmic challenges at different levels of energy optimization across the data center stack. The algorithmic challenge at the device level is to improve the energy efficiency of a single computational device via techniques such as job scheduling and speed scaling. We analyze the common speed scaling algorithms in both the worst-case model and the stochastic model to answer some fundamental issues in the design of speed scaling algorithms. The algorithmic challenge at the local data center level is to dynamically allocate resources (e.g., servers) and to dispatch the workload in a data center. We develop an online algorithm to make a data center more power-proportional by dynamically adapting the number of active servers. The algorithmic challenge at the global data center level is to dispatch the workload across multiple data centers, considering the geographical diversity of electricity prices, the availability of renewable energy, and network propagation delay. We propose algorithms to jointly optimize routing and provisioning in an online manner. Motivated by the above online decision problems, we move on to study a general class of online problems known as "smoothed online convex optimization", which seeks to minimize the sum of a sequence of convex functions when "smooth" solutions are preferred. This model allows us to bridge different research communities and gain a more fundamental understanding of general online decision problems.
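A bare-bones instance of that problem class (quadratic per-round costs with an absolute-value switching penalty; all numbers are invented for illustration) shows why "smooth" trajectories can beat per-round minimization:

```python
# Toy smoothed online convex optimization: per-round cost
# f_t(x) = (x - a_t)^2 plus a switching penalty beta * |x_t - x_{t-1}|.
# Minimizing f_t(x) + beta * |x - prev| in closed form soft-thresholds
# the move toward a_t: move only the part of the gap exceeding beta/2.
def smoothed_step(prev, a, beta):
    gap = a - prev
    move = max(0.0, abs(gap) - beta / 2.0)
    return prev + (1 if gap > 0 else -1) * move

beta = 1.0
targets = [float(t % 2) for t in range(50)]   # oscillating minimizers

def total_cost(policy):
    x, cost = 0.0, 0.0
    for a in targets:
        nxt = policy(x, a)
        cost += (nxt - a) ** 2 + beta * abs(nxt - x)
        x = nxt
    return cost

smoothed = total_cost(lambda x, a: smoothed_step(x, a, beta))
naive = total_cost(lambda x, a: a)   # always jump to the round's minimizer
# The smoothed policy settles at x = 0.5 and stops paying movement cost,
# while the naive policy pays the full switching penalty every round.
```

Against targets that flip every round, the smoothed policy's total cost is a fraction of the naive tracker's, which is the basic trade-off the "smoothed" formulation captures.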
Abstract:
Home to hundreds of millions of souls and land of excessiveness, the Himalaya is also the locus of a unique seismicity whose scope and peculiarities still remain to this day somewhat mysterious. Having claimed the lives of kings, or turned ancient timeworn cities into heaps of rubble and ruins, earthquakes eerily inhabit Nepalese folk tales with the fatalistic message that nothing lasts forever. From a scientific point of view as much as from a human perspective, solving the mysteries of Himalayan seismicity thus represents a challenge of prime importance. Documenting geodetic strain across the Nepal Himalaya with various GPS and leveling data, we show that unlike other subduction zones that exhibit a heterogeneous and patchy coupling pattern along strike, the last hundred kilometers of the Main Himalayan Thrust fault, or MHT, appear to be uniformly locked, devoid of any of the “creeping barriers” that traditionally ward off the propagation of large events. Since the approximately 20 mm/yr of convergence reckoned across the Himalaya matches previously established estimates of the secular deformation at the front of the arc, the slip accumulated at depth has to propagate elastically all the way to the surface at some point. And yet, neither large events from the past nor currently recorded microseismicity nearly compensate for the massive moment deficit that quietly builds up under the giant mountains. Along with this large unbalanced moment deficit, the uncommonly homogeneous coupling pattern on the MHT raises the question of whether or not the locked portion of the MHT can rupture all at once in a giant earthquake. Unequivocally answering this question appears contingent on the still elusive estimate of the magnitude of the largest possible earthquake in the Himalaya, and requires tight constraints on local fault properties.
What makes the Himalaya enigmatic also makes it the potential source of an incredible wealth of information, and we exploit some of the oddities of Himalayan seismicity in an effort to improve the understanding of earthquake physics and cipher out the properties of the MHT. Thanks to the Himalaya, the Indo-Gangetic plain is deluged each year under a tremendous amount of water during the annual summer monsoon, which collects and bears down on the Indian plate enough to pull it slightly away from the Eurasian plate, temporarily relieving a small portion of the stress mounting on the MHT. As the rainwater evaporates in the dry winter season, the plate rebounds and tension is increased back on the fault. Interestingly, the mild waggle of stress induced by the monsoon rains is about the same size as that from solid-Earth tides, which gently tug at the planet's solid layers; but whereas changes in earthquake frequency correspond with the annually occurring monsoon, there is no such correlation with Earth tides, which oscillate back and forth twice a day. We therefore investigate the general response of the creeping and seismogenic parts of the MHT to periodic stresses in order to link these observations to physical parameters. First, the response of the creeping part of the MHT is analyzed with a simple spring-and-slider system bearing rate-strengthening rheology, and we show that at the transition with the locked zone, where the friction becomes nearly velocity neutral, the response of the slip rate may be amplified at certain periods, whose values are analytically related to the physical parameters of the problem.
Such predictions therefore hold the potential of constraining fault properties on the MHT, but still await observational counterparts to be applied, as nothing indicates that the variations of seismicity rate on the locked part of the MHT are the direct expressions of variations of the slip rate on its creeping part, and no variations of the slip rate have been singled out from the GPS measurements to this day. When shifting to the locked seismogenic part of the MHT, spring-and-slider models with rate-weakening rheology are insufficient to explain the contrasting responses of the seismicity to the periodic loads that tides and monsoon both place on the MHT. Instead, we resort to numerical simulations using the Boundary Integral CYCLes of Earthquakes algorithm and examine the response of a 2D finite fault embedded with a rate-weakening patch to harmonic stress perturbations of various periods. We show that such simulations are able to reproduce results consistent with a gradual amplification of sensitivity as the perturbing period gets longer, up to a critical period corresponding to the characteristic time of evolution of the seismicity in response to a step-like perturbation of stress. This increase of sensitivity was not reproduced by simple 1D spring-slider systems, probably because of the complexity of the nucleation process, reproduced only by 2D fault models. When the nucleation zone is close to its critical unstable size, its growth becomes highly sensitive to any external perturbation, and the timing of the produced events may therefore be strongly affected. A fully analytical framework has yet to be developed, and further work is needed to fully describe the behavior of the fault in terms of physical parameters, which will likely provide the keys to deduce constitutive properties of the MHT from seismological observations.
Abstract:
A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (the number of unknowns is less than the number of equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (the number of unknowns exceeds the number of equations). However, in recent times, a wealth of theoretical and computational methods has been developed, primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., its information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.
We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.
Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
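The "more sources than sensors" capability of such geometries can be illustrated with the difference coarray of a two-level nested array (a small hand-picked configuration; the thesis develops the general theory):

```python
# Two-level nested array with N1 = N2 = 3: six physical sensors at
# positions {1, 2, 3} and {4, 8, 12}. Correlation-based processing
# effectively "sees" the set of pairwise position differences
# (the difference coarray) rather than the physical positions alone.
N1, N2 = 3, 3
positions = list(range(1, N1 + 1)) + [(N1 + 1) * k for k in range(1, N2 + 1)]
lags = {p - q for p in positions for q in positions}

# Known property of this geometry: the coarray is hole-free out to
# N2 * (N1 + 1) - 1 = 11, giving 23 consecutive lags from 6 sensors.
max_lag = N2 * (N1 + 1) - 1
hole_free = all(l in lags for l in range(-max_lag, max_lag + 1))
```

Six physical sensors thus emulate a 23-element virtual array, which is why correlation priors let these schemes identify more sources than sensors.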
This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.
Abstract:
Studies in turbulence often focus on two flow conditions, both of which occur frequently in real-world flows and are sought-after for their value in advancing turbulence theory. These are the high Reynolds number regime and the effect of wall surface roughness. In this dissertation, a Large-Eddy Simulation (LES) recreates both conditions over a wide range of Reynolds numbers Reτ = O(10^2)-O(10^8) and accounts for roughness by locally modeling the statistical effects of near-wall anisotropic fine scales in a thin layer immediately above the rough surface. A subgrid, roughness-corrected wall model is introduced to dynamically transmit this modeled information from the wall to the outer LES, which uses a stretched-vortex subgrid-scale model operating in the bulk of the flow. Of primary interest is the Reynolds number and roughness dependence of these flows in terms of first and second order statistics. The LES is first applied to a fully turbulent uniformly-smooth/rough channel flow to capture the flow dynamics over smooth, transitionally rough and fully rough regimes. Results include a Moody-like diagram for the wall averaged friction factor, believed to be the first of its kind obtained from LES. Confirmation is found for experimentally observed logarithmic behavior in the normalized stream-wise turbulent intensities. Tight logarithmic collapse, scaled on the wall friction velocity, is found for smooth-wall flows when Reτ ≥ O(10^6) and in fully rough cases. Since the wall model operates locally and dynamically, the framework is used to investigate non-uniform roughness distribution cases in a channel, where the flow adjustments to sudden surface changes are investigated. Recovery of mean quantities and turbulent statistics after transitions are discussed qualitatively and quantitatively at various roughness and Reynolds number levels.
The internal boundary layer, which is defined as the border between the flow affected by the new surface condition and the unaffected part, is computed, and a collapse of the profiles on a length scale containing the logarithm of friction Reynolds number is presented. Finally, we turn to the possibility of expanding the present framework to accommodate more general geometries. As a first step, the whole LES framework is modified for use in the curvilinear geometry of a fully-developed turbulent pipe flow, with implementation carried out in a spectral element solver capable of handling complex wall profiles. The friction factors have shown favorable agreement with the superpipe data, and the LES estimates of the Karman constant and additive constant of the log-law closely match values obtained from experiment.
Abstract:
The Edge Function method formerly developed by Quinlan (25) is applied to solve the problem of thin elastic plates resting on spring-supported foundations and subjected to lateral loads. The method can be applied to plates of any convex polygonal shape; however, since most plates are rectangular in shape, this specific class is investigated in this thesis. The method discussed can also be applied easily to other kinds of foundation models (e.g., springs connected to each other by a membrane) as long as the resulting differential equation is linear. In Chapter VII, the solution of a specific problem is compared with a known solution from the literature. In Chapter VIII, further comparisons are given. The problems of a concentrated load on an edge, and later on a corner, of a plate (as long as they are far away from other boundaries) are also treated in that chapter and generalized to other loading intensities and/or plate spring constants for a Poisson's ratio equal to 0.2.
Abstract:
This thesis introduces new tools for geometric discretization in computer graphics and computational physics. Our work builds upon the duality between weighted triangulations and power diagrams to provide concise, yet expressive discretization of manifolds and differential operators. Our exposition begins with a review of the construction of power diagrams, followed by novel optimization procedures to fully control the local volume and spatial distribution of power cells. Based on this power diagram framework, we develop a new family of discrete differential operators, an effective stippling algorithm, as well as a new fluid solver for Lagrangian particles. We then turn our attention to applications in geometry processing. We show that orthogonal primal-dual meshes augment the notion of local metric in non-flat discrete surfaces. In particular, we introduce a reduced set of coordinates for the construction of orthogonal primal-dual structures of arbitrary topology, and provide alternative metric characterizations through convex optimizations. We finally leverage these novel theoretical contributions to generate well-centered primal-dual meshes, sphere packing on surfaces, and self-supporting triangulations.
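The weight-to-volume control mentioned above can be sketched with a brute-force power-diagram membership test on a grid (the grid resolution, site positions, and weights are arbitrary illustration choices, not the thesis's optimization procedure):

```python
import numpy as np

# Power diagram membership: a point x belongs to the site p_i that
# minimizes the power distance |x - p_i|^2 - w_i. Increasing a site's
# weight w_i enlarges its power cell, which is the handle the thesis
# uses to control local volumes.
sites = np.array([[0.3, 0.5], [0.7, 0.5]])
xs, ys = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)

def cell_sizes(weights):
    # Squared distances from every grid point to every site.
    d2 = ((pts[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
    owner = np.argmin(d2 - np.asarray(weights)[None, :], axis=1)
    return np.bincount(owner, minlength=len(sites))

equal = cell_sizes([0.0, 0.0])     # plain Voronoi split at x = 0.5
boosted = cell_sizes([0.1, 0.0])   # larger weight -> larger power cell
```

With equal weights this reduces to the ordinary Voronoi diagram; the weight perturbation shifts the cell boundary and changes the cell volumes, which is exactly the degree of freedom the optimization procedures in the thesis exploit.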
Abstract:
The aim of this work is to compare, from an electrical standpoint, the membrane of the ganglion neuron with that of the neuroblastoma cell, analyzing the effects of fixed charges on the electric potential at the surfaces of the lipid bilayer and on the behavior of the potential profile across the membrane, considering the physico-chemical conditions of the resting state and of the action-potential state. The conditions for the occurrence of these states were based on numerical values of electrical and chemical parameters characteristic of these cells, obtained from the literature. The ganglion neuron exemplifies a healthy neuron, and the neuroblastoma cell, which is a tumor cell, exemplifies a pathological neuron altered by this condition. Neuroblastoma is a tumor originating from neural crest cells (neuroblasts), an embryonic structure that gives rise to many parts of the nervous system; it can arise at various sites in the body, from the region of the skull down to the lowest part of the spine. The model adopted to simulate the neuron membrane includes: (a) the spatial distributions of fixed electric charges in the glycocalyx and in the network of cytoplasmic proteins; (b) the charge distributions in the electrolytic solutions of the external and internal media; and (c) the surface charges of the lipid bilayer. Our results showed that, in the resting and action states, the inner (φ_Sbc) and outer (φ_Sgb) bilayer surface potentials of the neuroblastoma cell undergo no measurable change when the charge density on the inner surface (Q_Sbc) becomes 50 times more negative, both for a null charge density on the outer bilayer surface (Q_Sgb = 0) and for Q_Sgb ≠ 0. However, in the resting state, a slight drop in φ_Sbc of the ganglion neuron can be observed at this level of charge variation, and φ_Sgb of the ganglion neuron is more negative when Q_Sgb = 1/1100 e/Å².
In the action state, for Q_Sgb = 0, increasing the negativity of Q_Sbc produces no detectable change in φ_Sbc or φ_Sgb for either neuron. When we consider Q_Sgb = 1/1100 e/Å², φ_Sgb of the ganglion neuron becomes more negative, with no detectable variation in the surface potentials of the neuroblastoma cell. In both the resting and action states, φ_Sgb of the two cells shows no appreciable variation as the fixed charge spatially distributed in the cytoplasm becomes more negative, whereas φ_Sbc drops gradually in both cell types; in the action state this drop is faster. We found important differences in the potential profiles of the two cells, especially in the glycocalyx region.
Abstract:
47 p.