983 results for modern techniques


Relevance: 20.00%

Abstract:

The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication, distributed storage of data, and even the design of linear measurements used in compressive sensing. In all of these contexts, however, a code typically involves exploiting the algebraic or geometric structure underlying the application. In this thesis, we examine several problems in coding theory and try to gain some insight into the algebraic structure behind them.

The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
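For reference, the Ingleton inequality mentioned above can be written in a standard entropic (mutual-information) form for four jointly distributed random variables as

\[
I(X_1;X_2) \;\le\; I(X_1;X_2 \mid X_3) \;+\; I(X_1;X_2 \mid X_4) \;+\; I(X_3;X_4),
\]

where each term is a linear combination of joint entropies, e.g. \(I(X_1;X_2) = H(X_1) + H(X_2) - H(X_1,X_2)\). Entropy vectors arising from linear network codes always satisfy this bound, which is why group constructions that violate it are of interest.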

The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
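As a concrete, hypothetical illustration of the coherence computation (the sizes and row subset below are chosen for convenience and are not the construction developed in the thesis), one can keep a few rows of a discrete Fourier matrix and measure the largest off-diagonal Gram entry of the resulting unit-norm frame:

```python
import numpy as np

def coherence(F):
    """Largest |<f_i, f_j>| over distinct columns of F, after normalizing columns."""
    F = F / np.linalg.norm(F, axis=0, keepdims=True)
    gram = np.abs(F.conj().T @ F)
    np.fill_diagonal(gram, 0.0)          # ignore the trivial self inner products
    return gram.max()

n = 7
W = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)  # n x n DFT matrix
rows = [1, 2, 4]                         # quadratic residues mod 7 (a difference set)
F = W[rows, :]                           # 7 frame vectors in C^3

mu = coherence(F)
welch = np.sqrt((n - len(rows)) / (len(rows) * (n - 1)))  # Welch lower bound
print(f"coherence = {mu:.4f}, Welch bound = {welch:.4f}")
```

This particular row choice yields an equiangular frame that meets the Welch bound; other row choices give different coherences, which is exactly what a character-theoretic analysis of the group Fourier matrix is designed to bound.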

The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
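As a toy illustration of the minimum-distance computation for a systematic linear code (a small binary example for simplicity, not the Reed-Solomon setting analyzed in the thesis), one can enumerate all nonzero codewords by brute force:

```python
import numpy as np
from itertools import product

# Systematic generator matrix G = [I | P]: each parity symbol is a fixed
# function (here, an XOR) of a subset of the message symbols.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])

def minimum_distance(G):
    """Brute-force the minimum Hamming weight over all nonzero codewords."""
    k, n = G.shape
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue
        codeword = np.array(msg) @ G % 2
        best = min(best, int(codeword.sum()))
    return best

print("minimum distance:", minimum_distance(G))   # 3 for this [7, 4] example
```

The coding-with-constraints question described above asks, in essence, how large this minimum distance can be when the subset of message symbols entering each parity symbol is prescribed in advance.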

Relevance: 20.00%

Abstract:

Modern dentistry uses ultraconservative methods and techniques to correct the various types of chromatic alterations observed clinically. The means employed are based on peroxide-based chemical agents at various concentrations. The present study aimed to evaluate the microstructure of three light-cured composite resins subjected to the application of a 35% hydrogen peroxide bleaching agent (Whiteness HP Maxx, manufacturer: FGM) activated by a hybrid light-energy source (Whitening Lase laser-LED unit, manufacturer: DMC). To this end, 30 specimens (10 per group) were prepared as discs 13 mm in diameter and 2.0 mm thick in a Teflon and stainless-steel matrix, light-cured with a conventional halogen unit (Optilux 401 - Demetron/UR) for 40 seconds at a mean power density of 450 mW/cm2. The groups were arranged as follows: Group 1, microfilled resin (Durafill VS, manufacturer: Heraeus Kulzer); Group 2, microhybrid resin (Esthet-X, manufacturer: Dentsply); and Group 3, nanofilled resin (Filtek Supreme XT, manufacturer: 3M ESPE). All restorative materials were shade A2. After finishing and polishing, the specimens were stored for seven days in artificial saliva, cleaned ultrasonically, and artificially aged according to ASTM G 154. The specimens of the three groups were randomly divided into two subgroups (ST, untreated, and CT, treated) and then subjected to the experiments. The specimens of subgroups 1-ST, 2-ST and 3-ST were ground (SPEX SamplePrep 8000-series Mixer/Mill), the resulting powders were checked with a spectrometer (Shimadzu EDX 720) to confirm the absence of elements from the grinding medium, and were finally analyzed in an X-ray diffractometer (Philips PW 3040, X'Celerator; 40 kV; 30 mA; (λ): CuKα; 0.6; 0.2 mm; 0.05 (2θ); 2 s; 10-90 (2θ)). The specimens of subgroups 1-CT, 2-CT and 3-CT were then treated with the hydrogen peroxide according to the manufacturer's protocol for the selected hybrid light-energy source, totaling 9 applications of 10 minutes each, in cycles of 3 minutes of activation followed by 20 seconds of rest until the 10 minutes of each application were completed. After this treatment, the CT specimens were examined and evaluated by the same method described above. After graphical interpretation, comparative analysis by digital image processing in the KS400 3.0 software (Carl Zeiss Vision), and agreement analysis by five calibrated examiners using a scoring system, it was concluded that structural degradation occurred and that the crystalline structures of the resins studied were affected to different degrees when treated with hydrogen peroxide, in the order Group 1 > Group 3 > Group 2. Further studies on the interaction of hydrogen peroxide with composite resins were suggested.

Relevance: 20.00%

Abstract:

The application of principles from evolutionary biology has long been used to gain new insights into the progression and clinical control of both infectious diseases and neoplasms. This iterative evolutionary process consists of expansion, diversification and selection within an adaptive landscape: species are subject to random genetic or epigenetic alterations that result in variation; genetic information is inherited through asexual reproduction; and strong selective pressures, such as therapeutic intervention, can lead to the adaptation and expansion of resistant variants. These principles lie at the center of the modern evolutionary synthesis and constitute the primary reasons for the development of resistance and therapeutic failure, but they also provide a framework that allows for more effective control.

A model system for studying the evolution of resistance and the control of therapeutic failure is the treatment of chronic HIV-1 infection by broadly neutralizing antibody (bNAb) therapy. A relatively recent discovery is that a minority of HIV-infected individuals can produce broadly neutralizing antibodies, that is, antibodies that inhibit infection by many strains of HIV. Passive transfer of human antibodies for the prevention and treatment of HIV-1 infection is increasingly being considered as an alternative to a conventional vaccine. However, recent evolution studies have shown that antibody treatment can exert selective pressure on the virus that results in the rapid evolution of resistance. In certain cases, complete resistance to an antibody is conferred by a single amino acid substitution on the viral envelope of HIV.

The challenges in uncovering resistance mechanisms and designing effective combination strategies to control evolutionary processes and prevent therapeutic failure apply more broadly. We are motivated by two questions: Can we predict the evolution to resistance by characterizing genetic alterations that contribute to modified phenotypic fitness? Given an evolutionary landscape and a set of candidate therapies, can we computationally synthesize treatment strategies that control evolution to resistance?

To address the first question, we propose a mathematical framework to reason about the evolutionary dynamics of HIV from computationally derived Gibbs energy fitness landscapes, expanding the theoretical concept of an evolutionary landscape originally conceived by Sewall Wright into a computable, quantifiable, multidimensional, structurally defined fitness surface upon which to study complex HIV evolutionary outcomes.
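As a purely illustrative assumption (not the model actually developed in the thesis), one simple way to map a computed Gibbs energy onto fitness is a Boltzmann-type relation of the form

\[
f(s) \;\propto\; \frac{1}{1 + e^{\Delta G(s)/k_B T}},
\]

where \(\Delta G(s)\) is the computed free energy associated with a viral variant \(s\), so that energetically more favorable variants are assigned higher replicative fitness on the landscape.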

To design combination treatment strategies that control evolution to resistance, we propose a methodology that solves for optimal combinations and concentrations of candidate therapies, and that allows tradeoffs in treatment design, such as limiting the number of candidate therapies in the combination, dosage constraints, and robustness to error, to be explored quantitatively. Our algorithm is based on the application of recent results in optimal control to an HIV evolutionary dynamics model and is constructed from experimentally derived antibody-resistant phenotypes and their single-antibody pharmacodynamics. This method represents a first step towards integrating principled engineering techniques with an experimentally based mathematical model in the rational design of combination treatment strategies, and it offers a predictive understanding of the effects of combination therapies on the evolutionary dynamics and resistance of HIV. Preliminary in vitro studies suggest that the combination antibody therapies predicted by our algorithm can neutralize heterogeneous viral populations even when these contain resistance mutations.
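A minimal sketch of the flavor of this combination-design computation is given below, with entirely hypothetical IC50 values, a standard Hill-type model for single-antibody neutralization, and a simple independence rule for combining effects; the actual algorithm in the thesis is based on optimal control and an experimentally derived model, and is not reproduced here.

```python
import numpy as np
from itertools import product

# Hypothetical IC50 values (rows = viral variants, columns = antibodies);
# a large entry encodes a variant that is resistant to that antibody.
ic50 = np.array([[0.1, 5.0, 0.3],
                 [10.0, 0.2, 0.4],
                 [0.5, 0.6, 8.0]])

def neutralization(conc, hill=1.0):
    """Fraction of each variant neutralized by a concentration vector, using a
    Hill model per antibody and combining antibodies as independent effects."""
    single = (conc / ic50) ** hill / (1.0 + (conc / ic50) ** hill)
    return 1.0 - np.prod(1.0 - single, axis=1)

# Grid-search concentration combinations under a total-dose budget, maximizing
# the worst-case neutralization across variants (a robustness-style objective).
levels = [0.0, 0.5, 1.0, 2.0]
budget = 3.0
best_conc, best_worst = None, -1.0
for conc in product(levels, repeat=ic50.shape[1]):
    if sum(conc) > budget:
        continue
    worst = neutralization(np.array(conc)).min()
    if worst > best_worst:
        best_conc, best_worst = conc, worst

print("chosen concentrations:", best_conc,
      "worst-case neutralization: %.2f" % best_worst)
```

The design tradeoffs mentioned above (fewer antibodies, dose limits, robustness to error) appear here as the budget, the grid of allowed levels, and the worst-case objective; the thesis replaces this crude search with a principled optimal-control formulation.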

Relevance: 20.00%

Abstract:

I have been asked by administration how much of our collection could go into storage. They are optimistically hoping to free a room or two for faculty/staff offices, as some buildings need renovation or must be closed due to safety issues. Clearly, much of the population believes that all or most library materials are available online, for free. I will present the results of our surveys of material held and available online, of the space "freed" thanks to archiving, and of how little space is actually freed.

Relevance: 20.00%

Abstract:

The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
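For completeness, the usual formal statement of quadratic invariance (due to Rotkowitz and Lall, paraphrased here rather than quoted from the thesis) is: a constraint set \(\mathcal{S}\) of admissible controllers is quadratically invariant with respect to the plant \(G\) if

\[
K G K \in \mathcal{S} \quad \text{for all } K \in \mathcal{S},
\]

in which case the constraint \(K \in \mathcal{S}\) translates into a convex constraint on a Youla-type parameter, making the constrained optimal control problem convex.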

The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.

We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop.

Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given; indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it.

Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and they destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem.

Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
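The separation idea can be illustrated with a short convex-programming sketch (synthetic data and a generic formulation, assuming cvxpy is available; this is not the thesis's exact identification algorithm): minimize the nuclear norm of a "global" component that explains a measured response matrix up to a small residual, and attribute the residual to the local dynamics.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Synthetic data: a low-rank "global" response plus a small dense "local" part.
m, n, r = 30, 30, 3
global_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
local_true = 0.1 * rng.standard_normal((m, n))
Y = global_true + local_true

# Nuclear-norm heuristic: the matrix of smallest nuclear norm consistent with
# the data up to a tolerance; Y - G_hat is then attributed to the local part.
G_hat = cp.Variable((m, n))
eps = 0.1 * np.sqrt(m * n)
problem = cp.Problem(cp.Minimize(cp.normNuc(G_hat)),
                     [cp.norm(Y - G_hat, "fro") <= eps])
problem.solve()

svals = np.linalg.svd(G_hat.value, compute_uv=False)
print("leading singular values of the recovered global part:", np.round(svals[:6], 2))
```

In the identification setting described above, the low-rank component plays the role of the high-order global dynamics, and the residual plays that of the low-order local subsystem one actually wants to estimate.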

Relevance: 20.00%

Abstract:

Surface mass loads come in many different varieties, including the oceans, atmosphere, rivers, lakes, glaciers, ice caps, and snow fields. The loads migrate over Earth's surface on time scales that range from less than a day to many thousands of years. The weights of the shifting loads exert normal forces on Earth's surface. Since the Earth is not perfectly rigid, the applied pressure deforms the shape of the solid Earth in a manner controlled by the material properties of Earth's interior. One of the most prominent types of surface mass loading, ocean tidal loading (OTL), comes from the periodic rise and fall in sea-surface height due to the gravitational influence of celestial objects, such as the moon and sun. Depending on geographic location, the surface displacements induced by OTL typically range from millimeters to several centimeters in amplitude, which may be inferred from Global Navigation Satellite System (GNSS) measurements with sub-millimeter precision. Spatiotemporal characteristics of observed OTL-induced surface displacements may therefore be exploited to probe Earth structure. In this thesis, I present descriptions of contemporary observational and modeling techniques used to explore Earth's deformation response to OTL and other varieties of surface mass loading. With the aim to extract information about Earth's density and elastic structure from observations of the response to OTL, I investigate the sensitivity of OTL-induced surface displacements to perturbations in the material structure. As a case study, I compute and compare the observed and predicted OTL-induced surface displacements for a network of GNSS receivers across South America. The residuals in three distinct and dominant tidal bands are sub-millimeter in amplitude, indicating that modern ocean-tide and elastic-Earth models well predict the observed displacement response in that region. Nevertheless, the sub-millimeter residuals exhibit regional spatial coherency that cannot be explained entirely by random observational uncertainties and that suggests deficiencies in the forward-model assumptions. In particular, the discrepancies may reveal sensitivities to deviations from spherically symmetric, non-rotating, elastic, and isotropic (SNREI) Earth structure due to the presence of the South American craton.
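For context, in standard load Green's-function theory (the generic Farrell-type formulation, not the specific models used in this work), the predicted vertical OTL displacement at a site is a convolution of the tidal mass load with a Green's function built from load Love numbers:

\[
u(\mathbf{r}) \;=\; \int_{\text{ocean}} \rho_w \, Z(\mathbf{r}')\, G_u(\psi)\, dA',
\qquad
G_u(\psi) \;=\; \frac{a}{M_E} \sum_{n=0}^{\infty} h'_n \, P_n(\cos\psi),
\]

where \(Z\) is the tide height, \(\rho_w\) the seawater density, \(\psi\) the angular distance between the site \(\mathbf{r}\) and the load point \(\mathbf{r}'\), \(a\) and \(M_E\) the Earth's radius and mass, \(h'_n\) the load Love numbers determined by the assumed Earth model, and \(P_n\) the Legendre polynomials; sensitivity of the observed displacements to Earth structure enters through the \(h'_n\).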

Relevance: 20.00%

Abstract:

The advent of molecular biology has had a dramatic impact on all aspects of biology, not least applied microbial ecology. Microbiological testing of water has traditionally depended largely on culture techniques. Growing understanding that only a small proportion of microbial species are culturable, and that many microorganisms may attain a viable but non-culturable state, has promoted the development of novel approaches to monitoring pathogens in the environment. This has been paralleled by an increased awareness of the surprising genetic diversity of natural microbial populations. By targeting gene sequences that are specific for particular microorganisms, for example genes that encode diagnostic enzymes, or species-specific domains of conserved genes such as 16S ribosomal RNA coding sequences (rrn genes), the problems of culture can be avoided. Technical developments, notably in the area of in vitro amplification of DNA using the polymerase chain reaction (PCR), now permit routine detection and identification of specific microorganisms, even when present in very low numbers. Although the techniques of molecular biology have provided some very powerful tools for environmental microbiology, it should not be forgotten that these have their own drawbacks and biases in sampling. For example, molecular techniques are dependent on efficient lysis and recovery of nucleic acids from both vegetative forms and spores of microbial species that may differ radically when growing in the laboratory compared with the natural environment. Furthermore, PCR amplification can introduce its own bias depending on the nature of the oligonucleotide primers utilised. However, despite these potential caveats, it seems likely that a molecular biological approach, particularly with its potential for automation, will provide the mainstay of diagnostic technology for the foreseeable future.
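As a purely illustrative toy (hypothetical primer and template sequences, not a real diagnostic assay), the idea of detecting an organism by targeting a species-specific sequence can be sketched as an in-silico PCR check: does a primer pair bracket a product in the template DNA?

```python
# Toy in-silico PCR check with made-up sequences, for illustration only.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def find_amplicon(template: str, fwd: str, rev: str, max_len: int = 2000):
    """Return the predicted amplicon if the forward primer and the reverse
    complement of the reverse primer occur in the template, in that order."""
    start = template.find(fwd)
    if start == -1:
        return None
    rev_site = template.find(reverse_complement(rev), start + len(fwd))
    if rev_site == -1:
        return None
    amplicon = template[start:rev_site + len(rev)]
    return amplicon if len(amplicon) <= max_len else None

template = "AAGGTTACGGATTACCGTTAGCCAATGGCTTACGATCGGATCCTTAGGCA"
fwd, rev = "GGTTACGG", "TGCCTAAG"       # hypothetical "species-specific" primers
product = find_amplicon(template, fwd, rev)
print("amplicon detected" if product else "no amplicon", product or "")
```

Real assays of course involve primer design against curated reference sequences, amplification efficiency, and the lysis, recovery, and primer-bias issues noted above; the sketch only captures the specificity idea of targeting a diagnostic sequence.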