982 results for Mathematical techniques


Relevance: 20.00%

Abstract:

1. The effect of 2,2’-bis-[α-(trimethylammonium)methyl]azobenzene (2BQ), a photoisomerizable competitive antagonist, was studied at the nicotinic acetylcholine receptor of Electrophorus electroplaques using voltage-jump and light-flash techniques.

2. 2BQ, at concentrations below 3 μM, reduced the amplitude of voltage-jump relaxations but had little effect on the voltage-jump relaxation time constants under all experimental conditions. At higher concentrations and voltages more negative than -150 mV, 2BQ caused significant open-channel blockade.

3. Dose-ratio studies showed that the cis and trans isomers of 2BQ have equilibrium binding constants (K) of 0.33 and 1.0 μM, respectively. The binding constants determined for both isomers are independent of temperature, voltage, agonist concentration, and the nature of the agonist.

4. In a solution of predominantly cis-2BQ, visible-light flashes led to a net cis→trans isomerization and caused an increase in the agonist-induced current. This increase had at least two exponential components; the larger amplitude component had the same time constant as a subsequent voltage-jump relaxation; the smaller amplitude component was investigated using ultraviolet light flashes.

5. In a solution of predominantly trans-2BQ, UV-light flashes led to a net trans→cis isomerization and caused a net decrease in the agonist-induced current. This effect had at least two exponential components. The smaller and faster component was an increase in agonist-induced current and had a similar time constant to the voltage-jump relaxation. The larger component was a slow decrease in the agonist-induced current with a rate constant approximately an order of magnitude less than that of the voltage-jump relaxation. This slow component provided a measure of the rate constant for dissociation of cis-2BQ (k₋ = 60 s⁻¹ at 20 °C). Simple modelling of the slope of the dose-rate curves yields an association rate constant of 1.6 × 10⁸ M⁻¹ s⁻¹. This agrees with the association rate constant of 1.8 × 10⁸ M⁻¹ s⁻¹ estimated from the binding constant (Kᵢ). The Q₁₀ of the dissociation rate constant of cis-2BQ was 3.3 between 6 °C and 20 °C. The rate constants for association and dissociation of cis-2BQ at receptors are independent of voltage, agonist concentration, and the nature of the agonist.

6. We have measured the molecular rate constants of a competitive antagonist which has roughly the same K as d-tubocurarine but interacts more slowly with the receptor. This leads to the conclusion that curare itself has an association rate constant of 4 × 10⁹ M⁻¹ s⁻¹, or roughly as fast as possible for an encounter-limited reaction.
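The internal consistency of these rate constants can be checked with the standard relation K = k_off / k_on for a bimolecular binding equilibrium. A minimal sketch in Python, using only the values quoted in the abstract (the relation itself is textbook kinetics, not specific to this study):

```python
# Consistency check: equilibrium binding constant K = k_off / k_on
k_off = 60.0          # dissociation rate constant of cis-2BQ, 1/s (from the abstract)
K_cis = 0.33e-6       # equilibrium binding constant of cis-2BQ, M (from the abstract)

k_on_from_K = k_off / K_cis          # expected association rate constant, 1/(M*s)
print(f"k_on estimated from K: {k_on_from_K:.2e} /M/s")    # ~1.8e8, matching the abstract

k_on_kinetic = 1.6e8                 # association rate constant from the dose-rate slopes, 1/(M*s)
print(f"implied K from kinetics: {k_off / k_on_kinetic:.2e} M")   # ~0.38 uM, close to 0.33 uM
```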

Relevance: 20.00%

Abstract:

Intrinsically fuzzy morphological erosion and dilation are extended to a total of eight operations, all of which are formulated in terms of a single morphological operation, biased dilation. Based on the spatial coding of a fuzzy variable, a bidirectional projection concept is proposed. Thus fuzzy logic operations, arithmetic operations, and gray-scale dilation and erosion for the extended intrinsically fuzzy morphological operations can be included in a unified algorithm with only biased dilation and fuzzy logic operations. To execute this image-algebra approach, we present a cellular two-layer processing architecture that consists of a biased-dilation processor and a fuzzy-logic processor. (C) 1996 Optical Society of America
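For context, the intrinsically fuzzy dilation and erosion being extended are usually written with sup-min and inf-max combinations of the image and the structuring element. A minimal sketch of that baseline in Python/NumPy (the function names and the min/max t-norm choice are illustrative assumptions; the paper's "biased dilation" primitive is not reproduced here):

```python
import numpy as np

def fuzzy_dilation(img, se):
    """Sup-min fuzzy dilation of a membership image by a fuzzy structuring element."""
    H, W = img.shape
    h, w = se.shape
    pad = np.pad(img, ((h // 2,) * 2, (w // 2,) * 2), constant_values=0.0)
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            win = pad[i:i + h, j:j + w]
            out[i, j] = np.max(np.minimum(win, se))          # sup of pointwise min
    return out

def fuzzy_erosion(img, se):
    """Inf-max fuzzy erosion (dual of the dilation above)."""
    H, W = img.shape
    h, w = se.shape
    pad = np.pad(img, ((h // 2,) * 2, (w // 2,) * 2), constant_values=1.0)
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            win = pad[i:i + h, j:j + w]
            out[i, j] = np.min(np.maximum(win, 1.0 - se))    # inf of pointwise max with SE complement
    return out
```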

Relevance: 20.00%

Abstract:

A more powerful tool for binary image processing, i.e., logic-operated mathematical morphology (LOMM), is proposed. With LOMM the image and the structuring element (SE) are treated as binary logical variables, and the MULTIPLY between the image and the SE in the correlation is replaced with the 16 two-input logical operations. A total of 12 LOMM operations are obtained. The optical implementation of LOMM is described. The application of LOMM and its experimental results are also presented. (C) 1999 Optical Society of America.
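To make the idea concrete, ordinary binary dilation can be written as a correlation in which each image/SE pixel pair is combined with AND and the results are OR-accumulated over the SE support; LOMM generalizes this by substituting other two-input Boolean functions for the AND. A minimal sketch in Python/NumPy (the XOR variant is only an illustration of the substitution, not claimed to be one of the paper's 12 operations):

```python
import numpy as np

def logic_correlation(img, se, op):
    """Correlate a binary image with a binary SE, combining each pixel/SE pair with `op`
    (a two-input Boolean function) and OR-accumulating over the SE support."""
    H, W = img.shape
    h, w = se.shape
    pad = np.pad(img, ((h // 2,) * 2, (w // 2,) * 2), constant_values=0).astype(bool)
    out = np.zeros((H, W), dtype=bool)
    for i in range(H):
        for j in range(W):
            win = pad[i:i + h, j:j + w]
            out[i, j] = np.any(op(win, se.astype(bool)))
    return out

img = np.zeros((7, 7), dtype=bool)
img[3, 3] = True
se = np.ones((3, 3), dtype=bool)

dilation = logic_correlation(img, se, np.logical_and)   # AND recovers ordinary binary dilation
variant  = logic_correlation(img, se, np.logical_xor)   # swapping in another logic op (illustrative)
```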

Relevance: 20.00%

Abstract:

Fuzzy sets in the subject space are transformed to fuzzy solid sets in an increased object space on the basis of the development of the local umbra concept. Further, a counting transform is defined for reconstructing the fuzzy sets from the fuzzy solid sets, and the dilation and erosion operators in mathematical morphology are redefined in the fuzzy solid-set space. The algebraic structures of fuzzy solid sets can lead not only to fuzzy logic but also to arithmetic operations. Thus a fuzzy solid-set image algebra of two image transforms and five set operators is defined that can formulate binary and gray-scale morphological image-processing functions consisting of dilation, erosion, intersection, union, complement, addition, subtraction, and reflection in a unified form. A cellular set-logic array architecture is suggested for executing this image algebra. The optical implementation of the architecture, based on area coding of gray-scale values, is demonstrated. (C) 1995 Optical Society of America
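The solid-set idea is closely related to threshold (area) decomposition: a membership value is expanded into a stack of binary levels, binary morphology is applied level by level, and the result is recovered by counting the levels that remain set. A minimal sketch of that underlying mechanism in Python/NumPy (the level-stack encoding here is a generic stand-in for the paper's local-umbra construction, not its exact definition):

```python
import numpy as np

def to_solid(f, n_levels):
    """Encode membership values in [0, 1] as a stack of binary level sets (threshold decomposition)."""
    thresholds = (np.arange(n_levels) + 1) / n_levels
    return f[None, ...] >= thresholds[:, None, None]        # shape: (n_levels, H, W)

def counting_transform(stack):
    """Reconstruct the fuzzy set by counting the set levels at each pixel."""
    return stack.sum(axis=0) / stack.shape[0]

def binary_dilation(plane, se):
    H, W = plane.shape
    h, w = se.shape
    pad = np.pad(plane, ((h // 2,) * 2, (w // 2,) * 2), constant_values=False)
    return np.array([[np.any(pad[i:i + h, j:j + w] & se) for j in range(W)] for i in range(H)])

def solid_set_dilation(f, se, n_levels=16):
    """Gray-scale (flat-SE) dilation obtained by dilating each binary level and counting."""
    stack = to_solid(f, n_levels)
    dilated = np.array([binary_dilation(plane, se) for plane in stack])
    return counting_transform(dilated)
```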

Relevance: 20.00%

Abstract:

Fuzzification is introduced into gray-scale mathematical morphology by using two-input, one-output fuzzy rule-based inference systems. The fuzzy inferring dilation or erosion is defined from the approximate reasoning of the two consequences of a dilation or an erosion and an extended rank-order operation. The fuzzy inference systems, with their many rules and fuzzy membership functions, are further reduced to a simple fuzzy system formulated by only an exponential two-input, one-output function. Such a one-function fuzzy inference system is able to approximate complex fuzzy inference systems by using two specified parameters: a proportion that characterizes the fuzzy degree and an exponent that depicts the nonlinearity of the inference. The proposed fuzzy inferring morphological operators tend to keep object details comparable in size to the structuring element and to smooth the conventional morphological operations. Based on digital area coding of a gray-scale image, incoherent optical correlation for neighborhood connection, and optical thresholding for rank-order operations, the fuzzy inference system can be realized optically in parallel. (C) 1996 Society of Photo-Optical Instrumentation Engineers.
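The flavor of the reduced system can be illustrated by computing, for each neighborhood, the ordinary dilation (max) and erosion (min) and then blending them with a simple two-parameter function. The blending rule below is an illustrative stand-in chosen for this sketch, not the exponential function derived in the paper:

```python
import numpy as np

def fuzzy_inferring_dilation(img, size=3, proportion=0.3, exponent=2.0):
    """Soft dilation: blend the neighborhood max toward the neighborhood min.
    `proportion` sets how strongly fuzziness pulls away from the crisp max and
    `exponent` shapes the nonlinearity; both are illustrative parameters."""
    H, W = img.shape
    pad = np.pad(img, size // 2, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            win = pad[i:i + size, j:j + size]
            hi, lo = win.max(), win.min()
            w = proportion * ((hi - lo) ** exponent)   # mixing weight; w = 0 recovers the crisp dilation
            out[i, j] = (1.0 - w) * hi + w * lo
    return out
```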

Relevance: 20.00%

Abstract:

An optoelectronic implementation based on optical neighborhood operations and electronic nonlinear feedback is proposed to perform morphological image processing such as erosion, dilation, opening, closing and edge detection. Results of a numerical simulation are given and experimentally verified.
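The operations implemented optoelectronically have straightforward digital counterparts, which is convenient for generating reference results for the kind of numerical simulation mentioned. A minimal sketch using SciPy's gray-scale morphology (the edge detector shown is the standard morphological gradient, dilation minus erosion, chosen as one plausible digital reference):

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(64, 64)          # stand-in gray-scale image
size = (3, 3)                         # 3x3 flat structuring element

eroded  = ndimage.grey_erosion(img, size=size)
dilated = ndimage.grey_dilation(img, size=size)
opened  = ndimage.grey_opening(img, size=size)    # erosion followed by dilation
closed  = ndimage.grey_closing(img, size=size)    # dilation followed by erosion
edges   = dilated - eroded                        # morphological gradient as an edge map
```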

Relevance: 20.00%

Abstract:

In this work, a new family of methods for the optimization of multimodal problems is proposed. In these techniques, initial solutions are first generated in order to explore the search space. Next, in order to find more than one optimum, these solutions are grouped into subspaces using a fuzzy clustering algorithm. Finally, local searches are performed with deterministic optimization methods inside each subspace generated in the previous phase, in order to find its local optimum. The family comprises six variants, combining three schemes for initializing the solutions in the first phase and two local search algorithms in the third. To evaluate this new family of methods, its members are compared with other methodologies on problems from the literature, and the results obtained are promising.
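A minimal sketch of the three-phase structure described above (Python; k-means is used here as a stand-in for the fuzzy clustering step and Nelder-Mead for the deterministic local search; both are illustrative choices, not the variants evaluated in the work):

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.cluster import KMeans

def multimodal_search(f, bounds, n_samples=200, n_clusters=5, seed=0):
    """Phase 1: explore; Phase 2: group solutions into subspaces; Phase 3: local search per group."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T

    # Phase 1: random initial solutions spread over the search space
    X = rng.uniform(lo, hi, size=(n_samples, len(lo)))

    # Phase 2: cluster the samples (stand-in for the fuzzy clustering used in the work)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)

    # Phase 3: deterministic local search started from the best sample of each cluster
    optima = []
    for k in range(n_clusters):
        members = X[labels == k]
        x0 = min(members, key=f)                     # best point found in this subspace
        res = minimize(f, x0, method="Nelder-Mead")
        optima.append((res.x, res.fun))
    return optima

# Example on a simple multimodal test function
rastrigin = lambda x: 10 * len(x) + sum(xi**2 - 10 * np.cos(2 * np.pi * xi) for xi in x)
print(multimodal_search(rastrigin, bounds=[(-5.12, 5.12)] * 2))
```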

Relevance: 20.00%

Abstract:

A technique for obtaining approximate periodic solutions to nonlinear ordinary differential equations is investigated. The approach is based on defining an equivalent differential equation whose exact periodic solution is known. Emphasis is placed on the mathematical justification of the approach. The relationship between the differential equation error and the solution error is investigated, and, under certain conditions, bounds are obtained on the latter. The technique employed is to consider the equation governing the exact solution error as a two point boundary value problem. Among other things, the analysis indicates that if an exact periodic solution to the original system exists, it is always possible to bound the error by selecting an appropriate equivalent system.

Three equivalence criteria for minimizing the differential equation error are compared, namely, minimum mean square error, minimum mean absolute value error, and minimum maximum absolute value error. The problem is analyzed by way of example, and it is concluded that, on the average, the minimum mean square error is the most appropriate criterion to use.

A comparison is made between the use of linear and cubic auxiliary systems for obtaining approximate solutions. In the examples considered, the cubic system provides noticeable improvement over the linear system in describing periodic response.

A comparison of the present approach to some of the more classical techniques is included. It is shown that certain of the standard approaches where a solution form is assumed can yield erroneous qualitative results.
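To illustrate the minimum mean square error criterion with a linear auxiliary system, consider the Duffing-type equation x'' + x + ε x³ = 0 (an example chosen here for illustration, not taken from the text). Approximating the solution by x(t) = A cos(ωt) and choosing the equivalent linear system x'' + ω²x = 0 so as to minimize the mean-square differential equation error over one period gives ω² = 1 + (3/4) ε A², which the short check below confirms numerically:

```python
import numpy as np

eps, A = 0.2, 1.5
theta = np.linspace(0.0, 2.0 * np.pi, 4001)
x = A * np.cos(theta)                      # assumed periodic form of the solution

# Differential equation error between the original and the equivalent linear system:
#   e = (x'' + x + eps*x**3) - (x'' + w2*x) = x + eps*x**3 - w2*x
# Minimizing the mean of e**2 over one period is linear least squares in w2.
f = x + eps * x**3
w2_numeric = np.trapz(f * x, theta) / np.trapz(x * x, theta)
w2_closed  = 1.0 + 0.75 * eps * A**2

print(w2_numeric, w2_closed)               # both ~1.3375 for these parameters
```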

Relevance: 20.00%

Abstract:

The increase in industrial waste and the continuous production of residues raise many environmental concerns. In this context, the disposal of used tires has become a major problem because of the little attention paid to their final destination. This research therefore proposes the production of a polymer blend of polypropylene (PP), ethylene-propylene-diene rubber (EPDM), and scrap tire rubber powder (SRT). Response Surface Methodology (RSM), a collection of statistical and mathematical techniques useful for developing, improving, and optimizing processes, was applied to the investigation of the ternary blends. After suitable processing in a twin-screw extruder and injection molding, the mechanical properties of tensile strength and impact strength were determined and used as response variables. At the same time, scanning electron microscopy (SEM) was used to investigate the morphology of the different blends and to better interpret the results. With specific statistical tools and a minimum number of experiments, it was possible to develop response-surface models and to optimize the concentrations of the different components of the blend as a function of mechanical performance; in addition, by modifying the particle size of the rubber powder, an even more significant increase in mechanical performance was achieved.
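A minimal sketch of the response-surface step in Python/NumPy (the quadratic Scheffé-type mixture model and the variable names are generic illustrations; the composition and property arrays are placeholders to be filled with measured values, not data from the study):

```python
import numpy as np
from itertools import combinations

def fit_mixture_response_surface(X, y):
    """Fit a quadratic Scheffe mixture model, y ~ sum b_i x_i + sum b_ij x_i x_j,
    to component fractions X (rows summing to 1) and a measured response y."""
    pairs = list(combinations(range(X.shape[1]), 2))
    design = np.hstack([X] + [X[:, [i]] * X[:, [j]] for i, j in pairs])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)

    def predict(x):
        x = np.atleast_2d(x)
        d = np.hstack([x] + [x[:, [i]] * x[:, [j]] for i, j in pairs])
        return d @ coef

    return coef, predict

# X: measured PP / EPDM / SRT fractions per run; y: e.g. impact strength per run (to be supplied)
# coef, predict = fit_mixture_response_surface(X, y)
# Search a composition grid for the best predicted mechanical performance:
# grid = np.array([[a, b, 1 - a - b] for a in np.linspace(0, 1, 51)
#                                    for b in np.linspace(0, 1 - a, 51)])
# best = grid[np.argmax(predict(grid))]
```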

Relevance: 20.00%

Abstract:

The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and distributed storage of data, and even has connections to the design of linear measurements used in compressive sensing. But in all contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory and try to gain some insight into the algebraic structure behind them.

The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
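A minimal sketch of the group-characterizable construction (Python; the group S4 and its point stabilizers are an arbitrary illustrative choice, not one of the Ingleton-violating groups discussed): for subgroups G_1, ..., G_n of a finite group G, the vector with entries h_S = log2(|G| / |∩_{i∈S} G_i|) is an achievable entropy vector.

```python
from itertools import permutations, combinations
from math import log2

G = set(permutations(range(4)))                                  # the symmetric group S4, as permutation tuples
subgroups = [{g for g in G if g[i] == i} for i in range(3)]      # stabilizers of points 0, 1, 2

def entropy_vector(G, subgroups):
    """h_S = log2(|G| / |intersection of G_i for i in S|) for every nonempty S."""
    n = len(subgroups)
    h = {}
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            inter = set.intersection(*(subgroups[i] for i in S))
            h[S] = log2(len(G) / len(inter))
    return h

for S, val in entropy_vector(G, subgroups).items():
    print(S, round(val, 3))
```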

The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
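The row-selection view has a particularly simple instance for the cyclic group, whose Fourier matrix is the ordinary DFT matrix: keeping a subset of its rows and normalizing the columns yields a unit-norm frame whose coherence (the largest pairwise inner-product magnitude) can be checked directly. A small sketch (the particular row subset is arbitrary, chosen only to illustrate the computation):

```python
import numpy as np

n = 13                                            # cyclic group Z_13
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)   # group (DFT) Fourier matrix

rows = [1, 3, 9]                                  # an arbitrary subset of rows (a multiplicative coset here)
frame = F[rows, :] / np.sqrt(len(rows))           # columns are n unit-norm frame vectors in C^3

gram = frame.conj().T @ frame
coherence = np.max(np.abs(gram - np.eye(n)))      # largest off-diagonal inner-product magnitude
welch = np.sqrt((n - len(rows)) / (len(rows) * (n - 1)))   # Welch lower bound for comparison
print(coherence, welch)
```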

The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
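For a concrete toy instance of such constraints, take a systematic binary code in which each parity bit may only depend on a prescribed subset of message bits; the minimum distance can then be found by brute force over all nonzero messages. A short sketch over GF(2) (the small constraint pattern is arbitrary and only for illustration; the bounds in the thesis concern Reed-Solomon-type codes over larger fields):

```python
import numpy as np
from itertools import product

k = 4
# Each parity column of G = [I | P] is supported only on its allowed message subset (the constraints).
P = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]])                    # e.g. the first parity bit uses message bits {0, 1, 3}
G = np.hstack([np.eye(k, dtype=int), P])

d_min = min(int((np.array(m) @ G % 2).sum())
            for m in product([0, 1], repeat=k) if any(m))
print("minimum distance:", d_min)
```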

Relevance: 20.00%

Abstract:

The application of principles from evolutionary biology has long been used to gain new insights into the progression and clinical control of both infectious diseases and neoplasms. This iterative evolutionary process consists of expansion, diversification, and selection within an adaptive landscape: species are subject to random genetic or epigenetic alterations that result in variation; genetic information is inherited through asexual reproduction; and strong selective pressures such as therapeutic intervention can lead to the adaptation and expansion of resistant variants. These principles lie at the center of the modern evolutionary synthesis and constitute the primary reasons for the development of resistance and therapeutic failure, but they also provide a framework that allows for more effective control.

A model system for studying the evolution of resistance and control of therapeutic failure is the treatment of chronic HIV-1 infection by broadly neutralizing antibody (bNAb) therapy. A relatively recent discovery is that a minority of HIV-infected individuals can produce broadly neutralizing antibodies, that is, antibodies that inhibit infection by many strains of HIV. Passive transfer of human antibodies for the prevention and treatment of HIV-1 infection is increasingly being considered as an alternative to a conventional vaccine. However, recent evolution studies have uncovered that antibody treatment can exert selective pressure on virus that results in the rapid evolution of resistance. In certain cases, complete resistance to an antibody is conferred with a single amino acid substitution on the viral envelope of HIV.

The challenges in uncovering resistance mechanisms and designing effective combination strategies to control evolutionary processes and prevent therapeutic failure apply more broadly. We are motivated by two questions: Can we predict the evolution to resistance by characterizing genetic alterations that contribute to modified phenotypic fitness? Given an evolutionary landscape and a set of candidate therapies, can we computationally synthesize treatment strategies that control evolution to resistance?

To address the first question, we propose a mathematical framework to reason about the evolutionary dynamics of HIV from computationally derived Gibbs energy fitness landscapes, expanding the theoretical concept of an evolutionary landscape originally conceived by Sewall Wright into a computable, quantifiable, multidimensional, structurally defined fitness surface upon which to study complex HIV evolutionary outcomes.

To design combination treatment strategies that control evolution to resistance, we propose a methodology that solves for optimal combinations and concentrations of candidate therapies and allows tradeoffs in treatment design, such as limiting the number of candidate therapies in the combination, dosage constraints, and robustness to error, to be explored quantifiably. Our algorithm is based on the application of recent results in optimal control to an HIV evolutionary dynamics model and is constructed from experimentally derived antibody-resistant phenotypes and their single-antibody pharmacodynamics. This method represents a first step toward integrating principled engineering techniques with an experimentally based mathematical model in the rational design of combination treatment strategies, and it offers a predictive understanding of the effects of combination therapies on the evolutionary dynamics and resistance of HIV. Preliminary in vitro studies suggest that the combination antibody therapies predicted by our algorithm can neutralize heterogeneous viral populations even when those populations contain resistance mutations.
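A heavily simplified sketch of the kind of computation involved (Python; the Hill-type single-antibody pharmacodynamics, the Bliss-independence combination rule, the IC50 table, and the total-dose constraint are all illustrative assumptions for this sketch, not the model or the optimal-control machinery of the work):

```python
import numpy as np
from itertools import product

# Illustrative IC50s (ug/mL) of two candidate antibodies against three viral variants,
# including one variant resistant to each antibody (placeholder values, not measured data).
IC50 = np.array([[0.1, 0.1],     # sensitive variant
                 [50.0, 0.2],    # resistant to antibody 1
                 [0.2, 50.0]])   # resistant to antibody 2
hill = 1.0
growth = np.array([1.0, 0.9, 0.9])   # untreated relative replicative fitness of each variant

def net_growth(conc):
    """Per-variant growth under a concentration vector, assuming Hill single-antibody
    neutralization combined by Bliss independence (an assumption for this sketch)."""
    frac_unneutralized = np.prod(1.0 / (1.0 + (conc / IC50) ** hill), axis=1)
    return growth * frac_unneutralized

# Choose the combination, on a coarse grid and under a total-dose budget, that minimizes
# the growth of the worst surviving variant.
budget, grid = 10.0, np.linspace(0.0, 10.0, 41)
best = min((c for c in product(grid, repeat=2) if sum(c) <= budget),
           key=lambda c: net_growth(np.array(c)).max())
print("chosen concentrations:", best, "worst-variant growth:", net_growth(np.array(best)).max())
```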