928 results for powerful owl


Abstract:

Seismic reflection methods have been used extensively to probe the Earth's crust and to infer the nature of its formative processes. The analysis of multi-offset seismic reflection data extends the technique from a reconnaissance method to a powerful scientific tool that can be applied to test specific hypotheses. The treatment of reflections at multiple offsets becomes tractable if the assumptions of high-frequency rays are valid for the problem being considered; their validity can be tested by applying the methods of analysis to full-wave synthetics.

Three studies illustrate the application of these principles to investigations of the nature of the crust in southern California. A survey shot by the COCORP consortium in 1977 across the San Andreas fault near Parkfield revealed events in the record sections whose arrival times decreased with offset. The reflectors generating these events are imaged using a multi-offset three-dimensional Kirchhoff migration. Migrations of full-wave acoustic synthetics, having the same limitations in geometric coverage as the field survey, demonstrate the utility of this back-projection process for imaging. The migrated depth sections show the locations of the major physical boundaries of the San Andreas fault zone. The zone is bounded on the southwest by a near-vertical fault juxtaposing a Tertiary sedimentary section against uplifted crystalline rocks of the fault-zone block. On the northeast, the fault zone is bounded by a fault, containing slices of serpentinized ultramafics, that dips into the San Andreas and intersects it at 3 km depth. These interpretations can be made despite complications introduced by lateral heterogeneities.
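
To illustrate the back-projection principle underlying Kirchhoff migration, the sketch below implements a constant-velocity 2D diffraction stack: each recorded sample is smeared along the isochron of image points that share its source-to-receiver traveltime. This is a simplified stand-in for the multi-offset 3D migration used in the study; the function name, image grid, and constant-velocity assumption are illustrative only.

```python
import numpy as np

def kirchhoff_migrate(traces, src_x, rec_x, dt, v, img_x, img_z):
    """Constant-velocity 2D diffraction-stack (Kirchhoff) migration sketch.

    traces : (n_traces, n_samples) recorded seismograms
    src_x, rec_x : per-trace source/receiver surface positions (m)
    dt : sample interval (s); v : assumed medium velocity (m/s)
    img_x, img_z : 1-D arrays defining the output image grid (m)
    """
    n_traces, n_samples = traces.shape
    image = np.zeros((img_z.size, img_x.size))
    for i in range(n_traces):
        for iz, z in enumerate(img_z):
            # two-way traveltime: source -> image point -> receiver
            t = (np.hypot(img_x - src_x[i], z) +
                 np.hypot(img_x - rec_x[i], z)) / v
            idx = np.rint(t / dt).astype(int)
            ok = idx < n_samples
            # back-project each recorded sample onto its isochron
            image[iz, ok] += traces[i, idx[ok]]
    return image
```

Summing contributions over many source-receiver offsets is what focuses the dipping reflectors; with a single offset the isochron smearing would dominate.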

In 1985 the Calcrust consortium designed a survey in the eastern Mojave Desert to image structures in both the shallow and the deep crust. Preliminary field experiments showed that the major geophysical acquisition problem to be solved was the poor penetration of seismic energy through a low-velocity surface layer, whose effects could be mitigated through special acquisition and processing techniques. Industry data showed that high-quality records could be obtained from areas having a deeper, older sedimentary cover, prompting a redefinition of the geologic objectives. Long-offset stationary arrays were designed to provide reversed, wider-angle coverage of the deep crust over parts of the survey. The preliminary field tests, constant monitoring of data quality, and parameter adjustment allowed 108 km of excellent crustal data to be acquired.

This dataset, along with two others from the central and western Mojave, was used to constrain rock properties and the physical condition of the crust. The multi-offset analysis proceeded in two steps. First, an increase in reflection peak frequency with offset is indicative of a thinly layered reflector; the thickness and velocity contrast of the layering can be calculated from the spectral dispersion, discriminating between structures resulting from broad-scale and local effects. Second, the amplitude effects at different offsets of P-P scattering from weak elastic heterogeneities indicate whether the signs of the changes in density, rigidity, and Lamé's parameter at the reflector agree or are opposed. The effects of reflection generation and propagation in a heterogeneous, anisotropic crust were contained by the design of the experiment and the simplicity of the observed amplitude and frequency trends. Multi-offset spectra and amplitude-trend stacks of the three Mojave Desert datasets suggest that the most reflective structures in the middle crust are strong Poisson's ratio (σ) contrasts, indicating porous zones or the juxtaposition of units of mutually distant origin. Heterogeneities in σ increase towards the top of a basal crustal zone at ~22 km depth. The transitions to the basal zone and to the mantle both include increases in σ. The Moho itself includes ~400 m of layering with a velocity higher than that of the uppermost mantle. The Moho maintains the same configuration across the Mojave despite 5 km of crustal thinning near the Colorado River, indicating either that Miocene extension there thinned only the basal zone, or that the basal zone developed regionally after the extensional event.
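
The first step rests on simple thin-bed interference: for a bed bounded by equal-and-opposite reflection coefficients, the reflection spectrum satisfies |R(f)| ∝ |sin(πfτ)| and peaks at f = 1/(2τ), while the intra-bed delay τ = 2d cos θ / v shrinks as the incidence angle (a proxy for offset) grows, raising the peak frequency. A minimal sketch with hypothetical bed parameters:

```python
import numpy as np

# Thin bed bounded by equal-and-opposite reflection coefficients:
# impulse response r(t) = delta(t) - delta(t - tau), so
# |R(f)| = 2|sin(pi * f * tau)|, which peaks at f = 1 / (2 * tau).
d, v = 50.0, 6000.0          # hypothetical bed thickness (m), velocity (m/s)
for theta_deg in (0, 20, 40):                 # incidence angle ~ offset proxy
    theta = np.radians(theta_deg)
    tau = 2 * d * np.cos(theta) / v           # intra-bed two-way delay (s)
    f_peak = 1.0 / (2.0 * tau)                # spectral peak frequency (Hz)
    print(f"theta={theta_deg:2d} deg  tau={tau*1e3:.1f} ms  f_peak={f_peak:.0f} Hz")
```

Inverting the observed peak-frequency trend for τ at each offset is what yields the layer thickness, given a velocity estimate.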

Abstract:

The use of pseudoephedrine as a practical chiral auxiliary for asymmetric synthesis is described. Both enantiomers of pseudoephedrine are inexpensive commodity chemicals and can be N-acylated in high yields to form tertiary amides. In the presence of lithium chloride, the enolates of the corresponding pseudoephedrine amides undergo highly diastereoselective alkylations with a wide range of alkyl halides to afford α-substituted products in high yields. These products can then be transformed in a single operation into highly enantiomerically enriched carboxylic acids, alcohols, and aldehydes. Lithium amidotrihydroborate (LAB) is shown to be a powerful reductant for the selective reduction of tertiary amides in general, and pseudoephedrine amides in particular, to form primary alcohols.

Abstract:

Understanding the mechanisms of enzymes is crucial for our understanding of their role in biology and for designing methods to perturb or harness their activities for medical treatments, industrial processes, or biological engineering. One aspect of enzymes that makes them difficult to fully understand is that they are in constant motion, and these motions and the conformations adopted throughout these transitions often play a role in their function.

Traditionally, it has been difficult to isolate a protein in a particular conformation to determine what role each form plays in the reaction or biology of that enzyme. A new technology, computational protein design, makes the isolation of various conformations possible, and therefore is an extremely powerful tool in enabling a fuller understanding of the role a protein conformation plays in various biological processes.

One such protein that undergoes large structural shifts during different activities is human type II transglutaminase (TG2). TG2 is an enzyme that exists in two dramatically different conformational states: (1) an open, extended form, which is adopted upon the binding of calcium, and (2) a closed, compact form, which is adopted upon the binding of GTP or GDP. TG2 possesses two separate active sites, each with a radically different activity. The open, calcium-bound form of TG2 is believed to act as a transglutaminase, catalyzing the formation of an isopeptide bond between the sidechain of a peptide-bound glutamine and a primary amine. The closed, GTP-bound conformation is believed to act as a GTPase. TG2 is also implicated in a variety of biological and pathological processes.

To better understand the effects of TG2's conformations on its activities and pathological roles, we set out to design variants of TG2 locked in either the closed or the open conformation. We were able to design open-locked and closed-biased TG2 variants, and we used these designs to overturn the current understanding of which conformations carry out which activities and to explore each conformation's role in celiac disease models. This work also helped explain older, confusing results regarding this enzyme and its activities. The new model for TG2 activity has immense implications for our understanding of its functional capabilities in various environments, and for our ability to determine which conformations need to be inhibited in the design of new drugs for diseases in which TG2's activities are believed to elicit pathological effects.

Abstract:

This thesis summarizes the application of conventional and modern electron paramagnetic resonance (EPR) techniques to establish proximity relationships between paramagnetic metal centers in metalloproteins, and between metal centers and magnetic ligand nuclei, in two important and timely membrane proteins: succinate:ubiquinone oxidoreductase (SQR) from Paracoccus denitrificans and particulate methane monooxygenase (pMMO) from Methylococcus capsulatus. Such proximity relationships are thought to be critical to the biological function and the associated biochemistry mediated by the metal centers in these proteins. A mechanistic understanding of biological function relies heavily on structure-function relationships and on knowledge of how the molecular structure and electronic properties of the metal centers influence reactivity in metalloenzymes. EPR spectroscopy has proven to be one of the most powerful techniques for obtaining information about interactions between metal centers as well as for defining ligand structures. SQR is an electron transport enzyme wherein the substrates and the organic and metallic cofactors are held relatively far apart. Here, the proximity relationships of the metallic cofactors were studied through their weak spin-spin interactions by means of EPR power saturation and electron spin-lattice relaxation time (T_1) measurements, with the enzyme poised at designated reduction levels. Analysis of the electron T_1 measurements for the S-3 center when the b-heme is paramagnetic led to a detailed analysis of the dipolar interactions and to a distance determination between the two interacting metal centers. Studies of the ligand environment of the metal centers by electron spin echo envelope modulation (ESEEM) spectroscopy resulted in the identification of peptide nitrogens as coupled nuclei in the environment of the S-1 and S-3 centers.
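
The distance information in such dipolar analyses ultimately comes from the r⁻³ scaling of the dipole-dipole coupling. As a minimal illustration (not the relaxation-based analysis used in the thesis), the sketch below inverts the point-dipole formula for two S = 1/2 centers; the coupling value is illustrative.

```python
import numpy as np

MU_0 = 4e-7 * np.pi      # vacuum permeability (T*m/A)
MU_B = 9.2740e-24        # Bohr magneton (J/T)
H    = 6.6261e-34        # Planck constant (J*s)

def interspin_distance(nu_dd_hz, g1=2.0, g2=2.0):
    """Point-dipole estimate of the distance between two S=1/2 centers
    from the dipolar coupling frequency nu_dd (Hz):
        nu_dd = (mu0 / 4 pi) * g1 * g2 * mu_B**2 / (h * r**3)
    """
    r3 = (MU_0 / (4 * np.pi)) * g1 * g2 * MU_B**2 / (H * nu_dd_hz)
    return r3 ** (1.0 / 3.0)

# A coupling of ~52 MHz corresponds to roughly 1 nm separation:
print(interspin_distance(52e6) * 1e9, "nm")
```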

Finally, an EPR model was developed to describe the ferromagnetically coupled trinuclear copper clusters in pMMO when the enzyme is oxidized. The Cu(II) ions in these clusters appear to be strongly exchange coupled, and the EPR is consistent with equilateral triangular arrangements of type 2 copper ions. These results offer the first glimpse of the magneto-structural correlations for a trinuclear copper cluster of this type, which, until the work on pMMO, had no precedent in the metalloprotein literature. Such trinuclear copper clusters are rare even in synthetic models.
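
A minimal model of such a cluster is three exchange-coupled S = 1/2 ions on an equilateral triangle with isotropic exchange, H = -J(S1·S2 + S2·S3 + S3·S1). The sketch below (an assumption-laden toy, with J = 1 arbitrary) diagonalizes this Hamiltonian and shows that ferromagnetic coupling (J > 0) places the S = 3/2 quartet lowest, as in the coupled-cluster picture described above.

```python
import numpy as np

# Spin-1/2 operators (units of hbar)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def op(site, s):
    """Embed a single-spin operator at the given site of a 3-spin system."""
    mats = [I2, I2, I2]
    mats[site] = s
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

def heisenberg_triangle(J):
    """H = -J * (S1.S2 + S2.S3 + S3.S1), isotropic equilateral exchange."""
    H = np.zeros((8, 8), dtype=complex)
    for i, j in [(0, 1), (1, 2), (2, 0)]:
        for s in (sx, sy, sz):
            H -= J * op(i, s) @ op(j, s)
    return H

# Ferromagnetic coupling (J > 0): the S = 3/2 quartet is the ground state
E = np.linalg.eigvalsh(heisenberg_triangle(J=1.0))
print(np.round(E, 3))   # four levels at -0.75 (quartet), four at +0.75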

Abstract:

Smartphones and other powerful sensor-equipped consumer devices make it possible to sense the physical world at an unprecedented scale. Nearly 2 million Android and iOS devices are activated every day, each carrying numerous sensors and a high-speed internet connection. Whereas traditional sensor networks have typically deployed a fixed number of devices to sense a particular phenomenon, community networks can grow as additional participants choose to install apps and join the network. In principle, this allows networks of thousands or millions of sensors to be created quickly and at low cost. However, making reliable inferences about the world using so many community sensors involves several challenges, including scalability, data quality, mobility, and user privacy.

This thesis focuses on how learning at both the sensor and network level can provide scalable techniques for data collection and event detection. First, the thesis considers the abstract problem of distributed algorithms for data collection and proposes a distributed, online approach to selecting which set of sensors should be queried. In addition to providing theoretical guarantees for submodular objective functions, the approach is also compatible with local rules or heuristics for detecting and transmitting potentially valuable observations. Next, the thesis presents a decentralized algorithm for spatial event detection and describes its use in detecting strong earthquakes within the Caltech Community Seismic Network. Despite the fact that strong earthquakes are rare and complex events, and that community sensors can be very noisy, our decentralized anomaly detection approach obtains theoretical guarantees for event detection performance while simultaneously limiting the rate of false alarms.
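
The guarantees for submodular objectives echo the classical result that greedy selection achieves at least (1 - 1/e) of the optimal value for monotone submodular functions under a cardinality constraint. A minimal, centralized sketch with a hypothetical coverage objective (the thesis's algorithm is distributed and online; this only shows the underlying selection principle):

```python
import numpy as np

def greedy_select(n_sensors, k, coverage):
    """Greedy maximization of a monotone submodular set function.

    coverage : dict mapping sensor id -> set of covered region cells.
    Returns a k-subset whose coverage is >= (1 - 1/e) of optimal.
    """
    chosen, covered = [], set()
    for _ in range(k):
        gains = [(len(coverage[s] - covered), s)
                 for s in range(n_sensors) if s not in chosen]
        gain, best = max(gains)
        if gain == 0:
            break                      # no remaining sensor adds coverage
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Hypothetical example: 30 sensors covering random patches of 100 cells
rng = np.random.default_rng(0)
cov = {s: set(rng.integers(0, 100, size=15).tolist()) for s in range(30)}
sel, cells = greedy_select(30, k=5, coverage=cov)
print(sel, len(cells))
```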

Abstract:

Optical microscopy has become an indispensable tool for biological research since its invention, owing mostly to its sub-cellular spatial resolution, non-invasiveness, instrumental simplicity, and the intuitive observations it provides. Nonetheless, obtaining reliable, quantitative spatial information from conventional wide-field optical microscopy is not as straightforward as it appears. This is because, in the acquired images, information about out-of-focus regions is spatially blurred and mixed with in-focus information. In other words, conventional wide-field optical microscopy transforms the three-dimensional, volumetric information about an object into a two-dimensional form in each acquired image, and therefore distorts the spatial information about the object. Several fluorescence holography-based methods have demonstrated the ability to obtain three-dimensional information about objects, but these methods generally rely on decomposing stereoscopic visualizations to extract volumetric information and are unable to resolve complex three-dimensional structures such as a multi-layer sphere.

The concept of optical-sectioning techniques, on the other hand, is to detect only two-dimensional information about an object at each acquisition. Specifically, each image obtained by optical-sectioning techniques contains mainly the information about an optically thin layer inside the object, as if only a thin histological section is being observed at a time. Using such a methodology, obtaining undistorted volumetric information about the object simply requires taking images of the object at sequential depths.

Among existing methods of obtaining volumetric information, the practicability of optical sectioning has made it the most commonly used and most powerful one in biological science. However, when applied to imaging living biological systems, conventional single-point-scanning optical-sectioning techniques often cause some degree of photodamage because of the high focal intensity at the scanning point. To overcome this issue, several wide-field optical-sectioning techniques have been proposed and demonstrated, although not without introducing new limitations and compromises such as low signal-to-background ratios and reduced axial resolution. As a result, single-point-scanning optical-sectioning techniques remain the most widely used instrumentation for volumetric imaging of living biological systems to date.

To develop wide-field optical-sectioning techniques with optical performance equivalent to that of single-point-scanning ones, this thesis first introduces the mechanisms and limitations of existing wide-field optical-sectioning techniques, and then presents our innovations aimed at overcoming these limitations. We demonstrate, theoretically and experimentally, that our proposed wide-field optical-sectioning techniques can achieve diffraction-limited optical sectioning, low out-of-focus excitation, and high-frame-rate imaging in living biological systems. In addition to these imaging capabilities, our proposed techniques can be instrumentally simple and economical, and are straightforward to implement on conventional wide-field microscopes. Together, these advantages show the potential of our innovations to be widely used for high-speed, volumetric fluorescence imaging of living biological systems.

Abstract:

Mitochondria contain a 16.6 kb circular genome encoding 13 proteins as well as mitochondrial tRNAs and rRNAs. Copies of the genome are organized into nucleoids containing both DNA and proteins, including the machinery required for mtDNA replication and transcription. Although mtDNA integrity is essential for cellular and organismal viability, regulation of the proliferation of the mitochondrial genome is poorly understood. To elucidate the underlying mechanisms, we chose to study the interplay between mtDNA copy number and the proteins involved in mitochondrial fusion, another required function in cells. Strikingly, we found that mouse embryonic fibroblasts lacking fusion also had an mtDNA copy number deficit. To understand this phenomenon further, we analyzed the binding of mitochondrial transcription factor A (TFAM), whose role in transcription, replication, and packaging of the genome is well established and crucial for cellular maintenance. Using ChIP-seq, we detected largely uniform, non-specific binding across the genome, with no occupancy at the known specific binding sites in the regulatory region. We did detect a single binding site directly upstream of a known origin of replication, suggesting that TFAM may play a direct role in replication. Finally, although TFAM has previously been reported to localize to the nuclear genome, we found no evidence for such binding sites in our system.

To further understand the regulation of mtDNA by other proteins, we analyzed publicly available ChIP-seq datasets from ENCODE, modENCODE, and mouseENCODE for evidence of nuclear transcription factor binding to the mitochondrial genome. We identified eight human transcription factors and three mouse transcription factors that demonstrated binding events with the strand-asymmetric morphology of classical binding sites. ChIP-seq is a powerful tool for understanding the interactions between proteins and the mitochondrial genome, and future studies promise to further the understanding of how mtDNA is regulated within the nucleoid.
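
That strand-asymmetric morphology arises because 5'-end read counts on the forward strand pile up just upstream of a bound site and reverse-strand counts just downstream, offset by roughly the fragment length. A toy sketch of scoring positions for this signature (all data and parameters below are synthetic and illustrative, not from the datasets analyzed here):

```python
import numpy as np

def asymmetry_score(fwd, rev, frag_len):
    """Score each position for the classic ChIP-seq strand asymmetry:
    forward-strand 5' counts should lead reverse-strand counts by ~frag_len.

    fwd, rev : per-base 5'-end read counts on the two strands.
    Returns an array whose peaks mark candidate binding-site regions.
    """
    # shift the reverse profile back by the fragment length, then correlate
    shifted = np.roll(rev, -frag_len)
    return fwd * shifted

# Synthetic data: one planted site near position 500 in a 1 kb window
rng = np.random.default_rng(1)
fwd = rng.poisson(0.2, 1000).astype(float)
rev = rng.poisson(0.2, 1000).astype(float)
fwd[480:500] += 8          # forward reads just upstream of the site
rev[520:540] += 8          # reverse reads ~frag_len downstream
score = asymmetry_score(fwd, rev, frag_len=40)
print(score.argmax())       # ~480-500, near the planted site
```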

Abstract:

Modern robots are increasingly expected to function in uncertain and dynamically challenging environments, often in proximity with humans. In addition, wide-scale adoption of robots requires on-the-fly adaptability of software for diverse applications. These requirements strongly suggest the need to adopt formal representations of high-level goals and safety specifications, especially as temporal logic formulas. This approach allows the use of formal verification techniques for controller synthesis that can give guarantees for safety and performance. Robots operating in unstructured environments also face limited sensing capability, and correctly inferring a robot's progress toward a high-level goal can be challenging.

This thesis develops new algorithms for synthesizing discrete controllers in partially known environments under specifications represented as linear temporal logic (LTL) formulas. It is inspired by recent developments in finite abstraction techniques for hybrid systems and motion planning problems. The robot and its environment are assumed to admit a finite abstraction as a Partially Observable Markov Decision Process (POMDP), a powerful model class capable of representing a wide variety of problems. However, synthesizing controllers that satisfy LTL goals over POMDPs is a challenging problem which has received only limited attention.

This thesis proposes tractable, approximate algorithms for the control synthesis problem using Finite State Controllers (FSCs). The use of FSCs to control finite POMDPs allows the closed system to be analyzed as a finite global Markov chain. The thesis explicitly shows how the transient and steady-state behavior of the global Markov chain can be related to two different criteria for the satisfaction of LTL formulas. First, maximization of the probability of LTL satisfaction is posed as an optimization problem over a parametrization of the FSC, and analytic gradients are derived, allowing the use of first-order optimization techniques.
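
Concretely, for an FSC with nodes g, stochastic action choice ψ(a | g), and observation-driven node update η(g' | g, z), the closed loop evolves over pairs (s, g). A small sketch of building that global chain and estimating its steady state (array shapes and names are hypothetical, and the power iteration assumes the chain is ergodic):

```python
import numpy as np

def global_chain(T, O, psi, eta):
    """Transition matrix of the closed POMDP+FSC system over pairs (s, g).

    T   : (A, S, S)  T[a, s, s']   state transitions
    O   : (S, Z)     O[s', z]      observation probabilities
    psi : (G, A)     psi[g, a]     action choice at FSC node g
    eta : (G, Z, G)  eta[g, z, g'] FSC node update after observing z
    """
    A, S, _ = T.shape
    G, Z = psi.shape[0], O.shape[1]
    P = np.zeros((S * G, S * G))
    for s in range(S):
        for g in range(G):
            for a in range(A):
                for s2 in range(S):
                    for z in range(Z):
                        for g2 in range(G):
                            P[s * G + g, s2 * G + g2] += (
                                psi[g, a] * T[a, s, s2] * O[s2, z] * eta[g, z, g2])
    return P

def stationary(P, iters=10_000):
    """Power iteration for a stationary distribution of the global chain."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi
```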

The second criterion encourages rapid and frequent visits to a restricted set of states over infinite executions. It is formulated as a constrained optimization problem with a discounted long-term reward objective through the novel use of a fundamental equation for Markov chains: the Poisson equation. A new constrained policy iteration technique is proposed to solve the resulting dynamic program, which also provides a way to escape local maxima.
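
For reference, in its basic (undiscounted, unconstrained) form the Poisson equation for an ergodic chain with transition matrix P and reward r reads (I - P)h = r - η1, where η = π·r is the long-run average reward and h the bias vector; it can be solved through the fundamental matrix. A sketch of that baseline computation (the thesis embeds a discounted, constrained variant inside policy iteration):

```python
import numpy as np

def poisson_equation(P, r):
    """Solve (I - P) h = r - eta * 1 for an ergodic Markov chain:
    eta is the average reward, h the bias, normalized so pi . h = 0.
    """
    n = P.shape[0]
    # stationary distribution: left eigenvector of P for eigenvalue 1
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmin(np.abs(w - 1))])
    pi /= pi.sum()
    eta = pi @ r
    # fundamental matrix Z = (I - P + 1 pi^T)^{-1};  h = Z (r - eta 1)
    Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))
    h = Z @ (r - eta)
    return eta, h - pi @ h

P = np.array([[0.9, 0.1], [0.4, 0.6]])
r = np.array([1.0, 0.0])
print(poisson_equation(P, r))   # eta = 0.8, bias favoring state 0
```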

The algorithms proposed in the thesis are applied to the task-planning and execution challenges faced during the software track of the DARPA Autonomous Robotic Manipulation (ARM) challenge.

Abstract:

How powerful are Quantum Computers? Despite the prevailing belief that Quantum Computers are more powerful than their classical counterparts, this remains a conjecture backed by little formal evidence. Shor's famous factoring algorithm [Shor97] gives an example of a problem that can be solved efficiently on a quantum computer with no known efficient classical algorithm. Factoring, however, is unlikely to be NP-Hard, meaning that few unexpected formal consequences would arise, should such a classical algorithm be discovered. Could it then be the case that any quantum algorithm can be simulated efficiently classically? Likewise, could it be the case that Quantum Computers can quickly solve problems much harder than factoring? If so, where does this power come from, and what classical computational resources do we need to solve the hardest problems for which there exist efficient quantum algorithms?

We make progress toward understanding these questions through studying the relationship between classical nondeterminism and quantum computing. In particular, is there a problem that can be solved efficiently on a Quantum Computer that cannot be efficiently solved using nondeterminism? In this thesis we address this problem from the perspective of sampling problems. Namely, we give evidence that approximately sampling the Quantum Fourier Transform of an efficiently computable function, while easy quantumly, is hard for any classical machine in the Polynomial Time Hierarchy. In particular, we prove the existence of a class of distributions that can be sampled efficiently by a Quantum Computer, that likely cannot be approximately sampled in randomized polynomial time with an oracle for the Polynomial Time Hierarchy.
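
To make the sampled object concrete, consider the Boolean-cube variant of Fourier sampling: for a ±1-valued function f on n bits, a quantum computer can prepare the normalized state proportional to Σ_x f(x)|x⟩, apply Hadamards, and measure, obtaining y with probability f̂(y)². The brute-force classical computation below enumerates all 2^n inputs, which is exactly what becomes infeasible at scale; the small n and random f are purely illustrative.

```python
import numpy as np
from itertools import product

def fourier_sampling_dist(f_vals):
    """Distribution over y of squared Boolean Fourier coefficients of f.

    f_vals : length-2^n array of +/-1 values of f, indexed by input x.
    Quantumly this distribution is sampled with n Hadamards around an
    f-phase preparation; classically we enumerate all 2^n inputs.
    """
    n = int(np.log2(len(f_vals)))
    xs = np.array(list(product([0, 1], repeat=n)))   # all n-bit inputs
    signs = (-1.0) ** (xs @ xs.T)                    # (-1)^{x . y}
    # f_hat(y) = 2^-n * sum_x f(x) * (-1)^{x . y}
    f_hat = signs @ f_vals / len(f_vals)
    return f_hat ** 2                                # Parseval: sums to 1

rng = np.random.default_rng(2)
f = rng.choice([-1.0, 1.0], size=2 ** 4)             # random f on 4 bits
p = fourier_sampling_dist(f)
print(p.sum(), rng.choice(len(p), p=p))              # 1.0, one sample of y
```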

Our work complements and generalizes the evidence given in Aaronson and Arkhipov's work [AA2013] where a different distribution with the same computational properties was given. Our result is more general than theirs, but requires a more powerful quantum sampler.

Abstract:

The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems with known dynamics and a given cost functional. Given the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is second order and nonlinear, and examples exist where the problem has no solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality: since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems of all but modest dimension.

In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDPs). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
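
The discrete analogue works through a desirability function z(s) = exp(-V(s)): under the structural assumptions, the Bellman equation for z is linear in z and can be solved by simple iteration against the passive dynamics. A sketch on a hypothetical first-exit chain problem (the exponentiation convention and the problem data below are illustrative, not from the thesis):

```python
import numpy as np

def solve_desirability(P, q, terminal, q_term, iters=2000):
    """Linearly solvable MDP analogue of the linear HJB (first-exit form).

    P        : (S, S) passive dynamics; q : per-step state cost
    terminal : boolean mask of goal states with exit cost q_term
    The desirability z = exp(-V) satisfies the *linear* fixed point
        z = exp(-q) * (P @ z)   on interior states.
    """
    z = np.ones(len(q))
    z[terminal] = np.exp(-q_term[terminal])
    for _ in range(iters):
        z_new = np.exp(-q) * (P @ z)
        z_new[terminal] = np.exp(-q_term[terminal])   # boundary condition
        z = z_new
    V = -np.log(z)
    # optimal controlled transitions: u*(s'|s) proportional to P[s,s'] z[s']
    U = P * z[None, :]
    U /= U.sum(axis=1, keepdims=True)
    return V, U

# Hypothetical 5-state random-walk chain with the goal at the right end
P = np.zeros((5, 5))
for s in range(5):
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, 4)] += 0.5
q = np.full(5, 0.1); q[4] = 0.0
term = np.array([False] * 4 + [True])
V, U = solve_desirability(P, q, term, q_term=q)
print(np.round(V, 2))   # cost-to-go decreasing toward the goal state
```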

This is done by bringing together previously disjoint lines of research in computation. The first is the use of Sum of Squares (SOS) techniques for the synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem, and an SOS relaxation is taken of the partial differential constraints, leading to a hierarchy of semidefinite relaxations with improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations of the optimal value function. These results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing such problems to be solved via parallelization and low-order polynomials.

The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. This technique allows systems of equations to be solved through a low-rank decomposition, resulting in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows previously uncomputable problems to be solved quickly, scaling to systems as complex as quadcopter and VTOL aircraft models. The technique may be combined with the SOS approach, yielding not only a numerical method but also an analytical one that allows entirely new classes of systems to be studied and stability properties to be guaranteed.

The analysis of the linear HJB is completed by a study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit at opposite ends of a spectrum of optimization problems along which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, providing guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems reduce to a sequence of SOC problems linked via boundary conditions, and the linearity of the PDE allows control policy primitives to be pre-computed and then composed, at essentially zero cost, to satisfy a complex temporal logic specification.

Abstract:

With the introduction of fluoride as the principal anticariogenic agent and, perhaps, an increase of fluoride in our food chain, dental fluorosis has become a worldwide problem. The mechanisms leading to the formation of fluorotic enamel are unknown, but they must involve modifications of the basic physico-chemical demineralization and remineralization reactions of dental enamel. Increasing the amount of fluoride in the apatite crystal increases its lattice parameters. The aim of this work is to characterize healthy and fluorotic human dental enamel using synchrotron X-ray diffraction. All scattering profiles were measured at the X-ray diffraction beamline (XRD1) of the Laboratório Nacional de Luz Síncrotron, Campinas, SP. The experiments were performed on powdered samples and on polished slices. The powdered samples were analyzed to characterize healthy dental enamel, while the slices were analyzed in specific enamel areas identified as fluorotic. All profiles were compared with control enamel samples and with the literature. The evident similarity between the diffraction profiles showed the analogy between the structure of dental enamel and that of standard hydroxyapatite. The diffraction profiles of the enamel slices clearly differ from those obtained for powdered enamel; the differences include variations in crystallinity and preferred orientation. The interplanar distances found for control and fluorotic enamel in the slice samples showed no statistically significant differences. This can be explained by the fact that hydroxyapatite and fluorapatite form crystals with the same hexagonal structure and the same symmetry group, and have very similar lattice parameters, which the resolution of the system was not sufficient to distinguish. Finally, this work shows that X-ray diffraction using synchrotron radiation is a powerful technique for studying the crystallography and microstructure of dental enamel, and it can equally be applied to the study of other hard biological tissues and synthetic biomaterials.
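
The d-spacing comparison rests on Bragg's law, nλ = 2d sin θ. A minimal sketch converting diffraction peak positions to interplanar distances; the wavelength and 2θ values below are illustrative rather than measured values from this work (hydroxyapatite and fluorapatite d-spacings differ so little that, as noted above, the instrument could not resolve them):

```python
import numpy as np

def d_spacing(two_theta_deg, wavelength_nm):
    """Bragg's law: n * lambda = 2 * d * sin(theta), first order (n = 1)."""
    theta = np.radians(np.asarray(two_theta_deg) / 2.0)
    return wavelength_nm / (2.0 * np.sin(theta))

# Hypothetical synchrotron wavelength and apatite-like peak positions
lam = 0.1377                       # nm (illustrative beamline setting)
peaks_2theta = [22.9, 28.1, 28.9]  # degrees, illustrative enamel reflections
print(np.round(d_spacing(peaks_2theta, lam), 4))   # d-spacings in nm
```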

Abstract:

The prevalence of obesity and of the metabolic syndrome (MS) has been increasing dramatically in young people and is becoming a public health problem in most developed and developing countries. Both obesity and MS increase the number of patients exposed to cardiovascular risk. Recent studies show that a reduction in nitric oxide (NO) bioavailability is one of the main factors contributing to the deleterious action of insulin on the vessels of adult patients with obesity and MS. NO, a potent vasodilator and inhibitor of platelet aggregation, has as its precursor the cationic amino acid L-arginine, which is transported into platelets by the y+L carrier. A family of enzymes called NO synthases (NOS) catalyzes the oxidation of L-arginine to NO and L-citrulline and comprises three isoforms: neuronal (nNOS), inducible (iNOS), and endothelial (eNOS). The main objectives of the present study were to investigate different steps of the L-arginine-NO pathway in platelets, relating platelet aggregation, plasma L-arginine concentration, oxidative stress, and metabolic, hormonal, clinical, and inflammatory markers in adolescent patients with obesity and MS. Thirty adolescents were included in the study: ten with obesity, ten with MS, and ten healthy controls matched for age, sex, and Tanner stage (controls: n = 10, 15.6 ± 0.7 years; obese: n = 10, 15.0 ± 0.9 years; MS: n = 10, 14.9 ± 0.8 years). L-arginine transport (pmol/10⁹ cells/min) via system y+L was reduced in patients with MS (18.4 ± 3.8) and obesity (20.8 ± 4.7) compared with controls (52.3 ± 14.8). L-arginine influx via system y+L correlated positively with HDL-cholesterol levels, and negatively with insulin levels, the HOMA-IR index (insulin resistance), the HOMA-Beta index (beta-cell function), and leptin levels. Regarding NO production, obesity and MS did not affect the activity or expression of the NOS enzymes. Superoxide dismutase (SOD) activity, measured by inhibition of adrenaline auto-oxidation, was significantly different in platelets of patients with obesity (4235 ± 613.2 nmol/mg protein) compared with controls (1011 ± 123.6 nmol/mg protein) and MS (1713 ± 267.7 nmol/mg protein). At the systemic level, activation of this antioxidant enzyme was also observed in the serum of obese patients relative to controls. Lipid peroxidation, assessed by thiobarbituric acid reactive substances (TBARS), was unchanged in the serum of patients and controls. These results suggest that reduced L-arginine transport in platelets of obese and MS adolescents may be an early marker of platelet dysfunction, and that alteration of this pathway correlates with insulin resistance and hyperinsulinemia. The contribution of this study, and of factors that can be identified early, may help reduce cardiovascular risk in adulthood in this patient population.

Abstract:

This project analyzes the interference that the chest compressions of cardiopulmonary resuscitation (CPR) induce in the electrocardiogram (ECG) and thoracic impedance signals. The main objective is to study the relationship between these two interferences, developing a tool for this purpose. Defining this relationship would help find a way to reduce the effect of the interference, which in turn would increase the chances of successful resuscitation. To carry out the project, a custom database was built, following established criteria, from a set of out-of-hospital cardiac arrest records. This new database consists of 237 segments from 37 patients, each at least 10 seconds long, in which patients in asystole receive external chest compressions. In addition, a graphical interface was developed to characterize the interference; it displays the ECG and thoracic impedance signals in the time and frequency domains and allows their significant parameters to be extracted automatically or manually. The parameters are the per-compression maxima and minima of the signals and their locations, together with the fundamental frequency, its harmonics, and their amplitudes. Using this tool, the episodes of the database were processed. Finally, a second graphical interface was developed to treat the results obtained, in which the statistical distribution of the results and the linear relationships between them are analyzed. The main contribution of the project is therefore the development of a powerful tool for analyzing the interference induced by CPR chest compressions, one that can also be used to analyze resuscitation episodes from other sources.
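
As an illustration of the spectral parameters extracted, the sketch below estimates the compression fundamental frequency and harmonic amplitudes from an impedance-like segment via the FFT. The signal, sampling rate, and search band are synthetic and hypothetical (guideline-rate compressions at ~100-120 min⁻¹ correspond to a fundamental near 2 Hz):

```python
import numpy as np

def fundamental_and_harmonics(x, fs, n_harmonics=3, f_min=1.0, f_max=5.0):
    """Estimate the compression fundamental (~2 Hz for CPR near
    120 min^-1) and the amplitudes of its first harmonics via the FFT.
    """
    X = np.abs(np.fft.rfft(x * np.hanning(len(x)))) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = (freqs >= f_min) & (freqs <= f_max)
    f0 = freqs[band][np.argmax(X[band])]
    amps = [X[np.argmin(np.abs(freqs - k * f0))]
            for k in range(1, n_harmonics + 1)]
    return f0, amps

# Synthetic 10 s impedance-like segment: 2 Hz compressions plus noise
fs = 250.0
t = np.arange(0, 10, 1 / fs)
x = 1.0 * np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.sin(2 * np.pi * 4.0 * t)
x += 0.05 * np.random.default_rng(3).standard_normal(t.size)
f0, amps = fundamental_and_harmonics(x, fs)
print(round(f0, 2), np.round(amps, 3))
```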