965 results for "objective techniques"
Abstract:
Smartphones and other powerful sensor-equipped consumer devices make it possible to sense the physical world at an unprecedented scale. Nearly 2 million Android and iOS devices are activated every day, each carrying numerous sensors and a high-speed internet connection. Whereas traditional sensor networks have typically deployed a fixed number of devices to sense a particular phenomenon, community networks can grow as additional participants choose to install apps and join the network. In principle, this allows networks of thousands or millions of sensors to be created quickly and at low cost. However, making reliable inferences about the world using so many community sensors involves several challenges, including scalability, data quality, mobility, and user privacy.
This thesis focuses on how learning at both the sensor and network level can provide scalable techniques for data collection and event detection. First, this thesis considers the abstract problem of distributed algorithms for data collection, and proposes a distributed, online approach to selecting which set of sensors should be queried. In addition to providing theoretical guarantees for submodular objective functions, the approach is also compatible with local rules or heuristics for detecting and transmitting potentially valuable observations. Next, the thesis presents a decentralized algorithm for spatial event detection, and describes its use in detecting strong earthquakes within the Caltech Community Seismic Network. Despite the fact that strong earthquakes are rare and complex events, and that community sensors can be very noisy, our decentralized anomaly detection approach obtains theoretical guarantees for event detection performance while simultaneously limiting the rate of false alarms.
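The sensor-selection step rests on maximizing a submodular objective. As a minimal illustration (not the thesis's distributed, online algorithm), the sketch below greedily picks sensors to maximize a coverage-style objective; the sensor names and coverage sets are invented for the example:

```python
def coverage_value(selected, coverage):
    """Submodular objective: number of distinct regions covered."""
    covered = set()
    for s in selected:
        covered |= coverage[s]
    return len(covered)

def greedy_select(coverage, k):
    """Greedy selection; for monotone submodular objectives this is
    within a (1 - 1/e) factor of the optimal k-sensor subset."""
    selected = []
    remaining = list(coverage)
    for _ in range(k):
        best = max(remaining,
                   key=lambda s: coverage_value(selected + [s], coverage))
        selected.append(best)
        remaining.remove(best)
    return selected

# Invented sensors observing overlapping sets of regions.
coverage = {
    "s1": {1, 2, 3},
    "s2": {3, 4},
    "s3": {4, 5, 6},
    "s4": {1, 6},
}
chosen = greedy_select(coverage, 2)
```

The (1 - 1/e) guarantee is the classical bound for greedy maximization of monotone submodular functions; the thesis extends such guarantees to the distributed, online setting.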
Abstract:
This thesis reports on a method to improve in vitro diagnostic assays that detect immune response, with specific application to HIV-1. The inherent polyclonal diversity of the humoral immune response was addressed by using sequential in situ click chemistry to develop a cocktail of peptide-based capture agents, the components of which were raised against different, representative anti-HIV antibodies that bind to a conserved epitope of the HIV-1 envelope protein gp41. The cocktail was used to detect anti-HIV-1 antibodies from a panel of sera collected from HIV-positive patients, with improved signal-to-noise ratio relative to the gold standard commercial recombinant protein antigen. The capture agents were stable when stored as a powder for two months at temperatures close to 60°C.
Abstract:
Modern robots are increasingly expected to function in uncertain and dynamically challenging environments, often in proximity to humans. In addition, wide-scale adoption of robots requires software that can adapt on the fly to diverse applications. These requirements strongly suggest the need to adopt formal representations of high-level goals and safety specifications, especially as temporal logic formulas. This approach allows the use of formal verification techniques for controller synthesis that can give guarantees on safety and performance. Robots operating in unstructured environments also face limited sensing capability, so correctly inferring a robot's progress toward a high-level goal can be challenging.
This thesis develops new algorithms for synthesizing discrete controllers in partially known environments under specifications represented as linear temporal logic (LTL) formulas. It is inspired by recent developments in finite abstraction techniques for hybrid systems and motion planning problems. The robot and its environment are assumed to admit a finite abstraction as a Partially Observable Markov Decision Process (POMDP), a powerful model class capable of representing a wide variety of problems. However, synthesizing controllers that satisfy LTL goals over POMDPs is a challenging problem that has received only limited attention.
This thesis proposes tractable, approximate algorithms for the control synthesis problem using Finite State Controllers (FSCs). Using FSCs to control finite POMDPs allows the closed-loop system to be analyzed as a finite global Markov chain. The thesis explicitly shows how the transient and steady-state behavior of the global Markov chain can be related to two different criteria for the satisfaction of LTL formulas. First, maximizing the probability of LTL satisfaction is posed as an optimization problem over a parametrization of the FSC. Analytic gradients are derived, which allows the use of first-order optimization techniques.
The second criterion encourages rapid and frequent visits to a restricted set of states over infinite executions. It is formulated as a constrained optimization problem with a discounted long-term reward objective through a novel use of a fundamental equation for Markov chains, the Poisson equation. A new constrained policy iteration technique is proposed to solve the resulting dynamic program, which also provides a way to escape local maxima.
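To illustrate how an FSC closes the loop around a POMDP, the sketch below builds the global Markov chain over (state, controller-node) pairs and estimates its stationary distribution by power iteration. All transition, observation, and controller numbers are invented for the example, and the LTL-specific machinery is omitted:

```python
import numpy as np

# Tiny POMDP: 2 states, 2 actions, 2 observations (hypothetical numbers).
T = np.array([  # T[a, s, s2] = P(s2 | s, a)
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.4, 0.6]],   # action 1
])
O = np.array([  # O[s2, o] = P(o | s2)
    [0.8, 0.2],
    [0.3, 0.7],
])

# Deterministic FSC: node -> action, and (node, observation) -> next node.
action_of = [0, 1]
next_node = [[0, 1], [1, 0]]  # next_node[q][o]

# Global Markov chain over (s, q) pairs.
nS, nQ = 2, 2
P = np.zeros((nS * nQ, nS * nQ))
for s in range(nS):
    for q in range(nQ):
        a = action_of[q]
        for s2 in range(nS):
            for o in range(2):
                q2 = next_node[q][o]
                P[s * nQ + q, s2 * nQ + q2] += T[a, s, s2] * O[s2, o]

# Steady-state behavior via power iteration on the row-stochastic P.
pi = np.full(nS * nQ, 1.0 / (nS * nQ))
for _ in range(500):
    pi = pi @ P
```

The thesis analyzes exactly this kind of finite global chain, relating its transient and steady-state behavior to LTL satisfaction; parametrizing the FSC stochastically is what makes gradient-based optimization possible.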
The algorithms proposed in the thesis are applied to the task planning and execution challenges faced during the DARPA Autonomous Robotic Manipulation - Software challenge.
Abstract:
In the 1994 Mw 6.7 Northridge and 1995 Mw 6.9 Kobe earthquakes, an unexpected flaw in steel moment-frame buildings was exposed. The commonly used welded unreinforced flange, bolted web connections were observed to experience brittle fractures in a number of buildings, even at low levels of seismic demand. A majority of these buildings have not been retrofitted and may be susceptible to structural collapse in a major earthquake.
This dissertation presents a case study of retrofitting a 20-story pre-Northridge steel moment-frame building. Twelve retrofit schemes are developed that span a range of degrees of intervention. Three retrofitting techniques are considered: upgrading the brittle beam-to-column moment-resisting connections, and implementing either conventional or buckling-restrained brace elements within the existing moment-frame bays. The retrofit schemes include some that are designed to the basic safety objective of ASCE-41, Seismic Rehabilitation of Existing Buildings.
Detailed finite element models of the baseline building and the retrofit schemes are constructed. The models account for brittle beam-to-column moment-resisting connection fractures, column splice fractures, column baseplate fractures, accidental contributions from "simple" non-moment-resisting beam-to-column connections to the lateral force-resisting system, and composite action of beams with the overlying floor system. In addition, foundation interaction is included through nonlinear translational springs underneath basement columns.
To investigate the effectiveness of the retrofit schemes, the building models are analyzed under ground motions from three large-magnitude simulated earthquakes that cause intense shaking in the greater Los Angeles metropolitan area, and under recorded ground motions from actual earthquakes. It is found that retrofit schemes that convert the existing moment frames into braced frames by implementing either conventional or buckling-restrained braces are effective in limiting structural damage and mitigating structural collapse. In the three simulated earthquakes, a 20% chance of simulated collapse is realized at a PGV of around 0.6 m/s for the baseline model, but at a PGV of around 1.8 m/s for some of the retrofit schemes. However, conventional braces are observed to deteriorate rapidly. Hence, if a braced frame that employs conventional braces survives a large earthquake, it is questionable how much service the braces can provide in potential aftershocks.
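A common way to summarize such collapse statistics is a lognormal fragility curve. The sketch below is only illustrative: it anchors hypothetical fragility curves to the 20%-collapse PGVs quoted above (0.6 m/s and 1.8 m/s) under an assumed dispersion, which the dissertation does not specify:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def fragility(pgv, median, beta):
    """Lognormal collapse fragility: P(collapse | PGV), PGV in m/s."""
    return phi(math.log(pgv / median) / beta)

def median_from_point(pgv, p, beta):
    """Median PGV implied by a single (PGV, collapse-probability) anchor,
    found by bisection so the sketch stays dependency-free."""
    lo, hi = 1e-6, 1e3
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if fragility(pgv, mid, beta) > p:
            lo = mid      # median too small: curve sits too high
        else:
            hi = mid
    return math.sqrt(lo * hi)

BETA = 0.5  # assumed dispersion; not given in the abstract
m_base = median_from_point(0.6, 0.20, BETA)      # baseline model
m_retrofit = median_from_point(1.8, 0.20, BETA)  # braced retrofit schemes
```

With a common dispersion, the implied median collapse PGV scales linearly with the anchor point, so the retrofit curves sit a factor of three to the right of the baseline curve.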
Abstract:
The objective of this thesis is to develop a framework for conducting velocity resolved - scalar modeled (VR-SM) simulations, which will enable accurate simulations at higher Reynolds and Schmidt (Sc) numbers than are currently feasible. The framework established serves as a first step toward future simulation studies for practical applications. To achieve this goal, in-depth analyses of the physical, numerical, and modeling aspects related to Sc >> 1 are presented, specifically when modeling in the viscous-convective subrange. Transport characteristics are scrutinized by examining scalar-velocity Fourier mode interactions in Direct Numerical Simulation (DNS) datasets, and suggest that scalar modes in the viscous-convective subrange do not directly affect large-scale transport for high Sc. Further observations confirm that discretization errors inherent in numerical schemes can be large enough to wipe out any meaningful contribution from subfilter models. This provides strong incentive to develop more effective numerical schemes to support high-Sc simulations. To lower numerical dissipation while maintaining physically and mathematically appropriate scalar bounds during the convection step, a novel method of enforcing bounds is formulated, specifically for use with cubic Hermite polynomials. Boundedness of the transported scalar is enforced by derivative limiting techniques, and physically plausible single sub-cell extrema are allowed to exist to help minimize numerical dissipation. The proposed bounding algorithm yields significant performance gains in DNS of turbulent mixing layers and of homogeneous isotropic turbulence. Next, the combined physical/mathematical behavior of the subfilter scalar-flux vector is analyzed in homogeneous isotropic turbulence by examining the vector orientation in the strain-rate eigenframe.
The results indicate no discernible dependence on the modeled scalar field, and lead to the identification of the tensor-diffusivity model as a good representation of the subfilter flux. Velocity resolved - scalar modeled simulations of homogeneous isotropic turbulence are conducted to confirm the behavior theorized in these a priori analyses, and suggest that the tensor-diffusivity model is well suited to the viscous-convective subrange. Simulations of a turbulent mixing layer are also discussed, with the partial objective of analyzing the Schmidt number dependence of a variety of scalar statistics. Large-scale statistics are confirmed to be relatively independent of the Schmidt number for Sc >> 1, which is explained by the dominance of subfilter dissipation over resolved molecular dissipation in the simulations. Overall, the VR-SM framework presented is quite effective in predicting large-scale transport characteristics of high Schmidt number scalars; however, predicting subfilter quantities would entail additional modeling intended specifically for that purpose. The VR-SM simulations presented in this thesis provide an opportunity to overlap with experimental studies, while at the same time creating an assortment of baseline datasets for future validation of LES models, thereby satisfying the objectives outlined for this work.
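The derivative-limiting idea can be illustrated with a simpler, classical limiter. The sketch below applies a Fritsch-Carlson-style clamp to the endpoint derivatives of a cubic Hermite interpolant so the result stays within the nodal bounds; it is a stand-in for, not a reproduction of, the thesis's bounding algorithm, which additionally admits single sub-cell extrema:

```python
import numpy as np

def limited_hermite(x, f, xq):
    """Piecewise cubic Hermite interpolation with a Fritsch-Carlson-style
    derivative clamp; keeps the interpolant within the nodal bounds."""
    x = np.asarray(x, dtype=float)
    f = np.asarray(f, dtype=float)
    d = np.gradient(f, x)                # initial derivative estimates
    delta = np.diff(f) / np.diff(x)      # cell-average slopes
    for i in range(len(x) - 1):
        for j in (i, i + 1):
            if delta[i] == 0.0:
                d[j] = 0.0               # flat cell: flatten endpoints
            else:
                r = d[j] / delta[i]
                if r < 0.0:
                    d[j] = 0.0           # sign mismatch: flatten
                elif r > 3.0:
                    d[j] = 3.0 * delta[i]  # clamp into the monotone box
    xq = np.atleast_1d(np.asarray(xq, dtype=float))
    out = np.empty_like(xq)
    for k, xv in enumerate(xq):
        i = int(np.clip(np.searchsorted(x, xv) - 1, 0, len(x) - 2))
        h = x[i + 1] - x[i]
        t = (xv - x[i]) / h
        h00 = 2*t**3 - 3*t**2 + 1        # Hermite basis functions
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        out[k] = h00*f[i] + h10*h*d[i] + h01*f[i+1] + h11*h*d[i+1]
    return out

# Monotone data with a sharp front; an unlimited cubic can overshoot here.
x_nodes = np.array([0.0, 1.0, 2.0, 3.0])
f_nodes = np.array([0.0, 0.1, 0.9, 1.0])
y = limited_hermite(x_nodes, f_nodes, np.linspace(0.0, 3.0, 301))
```

Clamping the slope ratio into [0, 3] is the classical sufficient condition for a bounded, monotone cubic on each cell; the thesis's scheme relaxes this to allow physically plausible extrema and thereby reduce numerical dissipation.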
Abstract:
Throughout the twentieth century, few dendrochronology studies were carried out on species from tropical environments, owing to the belief that climatic conditions in these regions did not vary in a sufficiently marked and regular way to induce an annual rhythm of radial growth. Work on this topic in recent decades has revealed that the formation of annual growth rings in the tropics can be associated with several factors, such as a well-defined dry season, seasonal flooding, responses to phenological behavior, responses to photoperiod, and endogenous rhythms. The present study aims to understand the radial growth dynamics of an Atlantic Forest species growing in its natural environment. To this end, we proposed to: i) investigate the periodicity of cambial activity and the factors that influence it; ii) estimate age and diameter growth rate; and iii) correlate environmental factors with growth rings, in individuals of Cedrela odorata L. To study cambial activity, stem samples were taken at 1.30 m above the ground, containing periderm, cambial zone, and secondary xylem and phloem, using non-destructive methods. The vegetative phenology and fruiting of the sampled individuals were monitored throughout the experimental period. The collected material was processed according to standard plant anatomy techniques and analyzed under light and fluorescence microscopy. Photoperiod, precipitation, temperature, and vegetative phenology data were correlated with cambial activity. For the growth-ring study, samples were also collected at 1.30 m above the ground with a Pressler increment borer. The samples were polished and analyzed under a stereomicroscope to mark and count the growth rings, and ring widths were measured to determine radial growth rates.
The historical temperature and precipitation series was correlated with the growth-ring chronology. The results indicated that cambial activity follows an annual growth rhythm, correlated with the seasonality of photoperiod, precipitation, and vegetative phenology. Growth-ring analysis made it possible to estimate the age of the individuals and to determine the mean increment rate and the cumulative and mean annual diameter increments for the species at the study site. The radial increment data showed no relationship between tree age and diameter. Ring-width variation showed no significant correlations with the climatic factors analyzed.
Abstract:
Over the past few decades, ferromagnetic spinwave resonance in magnetic thin films has been used as a tool for studying the properties of magnetic materials. A full understanding of the boundary conditions at the surface of the magnetic material is extremely important. Such an understanding has been the general objective of this thesis. The approach has been to investigate various hypotheses of the surface condition and to compare the results of these models with experimental data. The conclusion is that the boundary conditions are largely due to thin surface regions with magnetic properties different from the bulk. In the calculations these regions were usually approximated by uniform surface layers; the spins were otherwise unconstrained except by the same mechanisms that exist in the bulk (i.e., no special "pinning" at the surface atomic layer is assumed). The variation of the ferromagnetic spinwave resonance spectra in YIG films with frequency, temperature, annealing, and orientation of applied field provided an excellent experimental basis for the study.
This thesis can be divided into two parts. The first part is ferromagnetic resonance theory; the second part is the comparison of calculated and experimental data for YIG films. Both are essential in understanding the conclusion that surface regions with properties different from the bulk are responsible for the resonance phenomena associated with boundary conditions.
The theoretical calculations have been made by finding the wave vectors characteristic of the magnetic fields inside the magnetic medium, and then combining the fields associated with these wave vectors in superposition to match the specified boundary conditions. In addition to magnetic boundary conditions required for the surface layer model, two phenomenological magnetic boundary conditions are discussed in detail. The wave vectors are easily found by combining the Landau-Lifshitz equations with Maxwell's equations. Mode positions are most easily predicted from the magnetic wave vectors obtained by neglecting damping, conductivity, and the displacement current. For an insulator where the driving field is nearly uniform throughout the sample, these approximations permit a simple yet accurate calculation of the mode intensities. For metal films this calculation may be inaccurate but the mode positions are still accurately described. The techniques necessary for calculating the power absorbed by the film under a specific excitation including the effects of conductivity, displacement current and damping are also presented.
In the second part of the thesis the properties of magnetic garnet materials are summarized and the properties believed associated with the two surface regions of a YIG film are presented. Finally, the experimental data and calculated data for the surface layer model and other proposed models are compared. The conclusion of this study is that the remarkable variety of spinwave spectra that arises from various preparation techniques and subsequent treatments can be explained by surface regions with magnetic properties different from the bulk.
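As a rough illustration of how mode positions follow from the approximations described above, the sketch below evaluates the textbook standing-spin-wave resonance condition for a perpendicularly magnetized film with fully pinned surfaces; the material constants are nominal YIG values and the geometry is assumed, so this is a generic estimate, not the thesis's surface-layer model:

```python
import math

def mode_fields(f_ghz, d_cm, n_modes, four_pi_ms=1750.0, a_exch=3.7e-7):
    """Resonance fields H_n (Oe) for standing spin waves k_n = n*pi/d in a
    perpendicularly magnetized film, neglecting damping and conductivity:
        omega/gamma = H_n - 4*pi*Ms + (2A/Ms) * k_n^2
    Defaults are nominal YIG values in CGS units and are assumptions."""
    gamma = 2.8e6                        # Hz/Oe for g ~ 2
    ms = four_pi_ms / (4.0 * math.pi)
    h_uniform = f_ghz * 1e9 / gamma + four_pi_ms
    return [h_uniform - (2.0 * a_exch / ms) * (n * math.pi / d_cm) ** 2
            for n in range(1, n_modes + 1)]

# 9 GHz excitation of a 1-micron film: modes march down in field from the
# uniform-mode position, with spacings growing as (2n + 1).
fields = mode_fields(9.0, 1.0e-4, 5)
```

The surface-layer models studied in the thesis modify the effective boundary conditions, shifting mode positions and intensities away from this fully pinned idealization.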
Abstract:
This dissertation addresses access to high-complexity services, particularly diagnostic and complementary examinations, studied among users of private health plans who seek specialized care and diagnosis. Since the 1980s, users of the public health system have been turning to supplementary health care. However, whether access is in fact guaranteed in the private domain through health plan contracts is the uncertainty that inspired this research, which is justified by the relevance of actions that can improve the regulatory quality of health plans through the social control exercised by their users. The general objective is to analyze perceptions of access to high-complexity examinations in private health services among health plan users. The specific objectives are to describe health plan users' perceptions of access to high-complexity examinations; to analyze the motivations of private health plan users for undergoing high-complexity examinations through the private care network; and to analyze health plan users' level of satisfaction with access to high-complexity examinations. The methodology is qualitative-descriptive; the sample comprised thirty health plan users over 18 years of age, selected in the study setting in 2010. The study setting was a private diagnostic medicine laboratory in Rio de Janeiro. Data were collected using a form and structured individual interviews. The form was analyzed using descriptive statistics, and the interviews through thematic-categorical content analysis. The health plan users stated that access to high-complexity examinations is easily guaranteed.
Their main motivations for undergoing these examinations in the private care network were the speed of service; the flexibility and ease of scheduling online, by telephone, or in person at the laboratory studied; prompt delivery of results; the difficulty and slowness of care in the SUS; the location of the accredited provider near residential neighborhoods or workplaces; excellent diagnostic imaging quality; and the user's ability to choose between open and closed MRI and CT modalities, as well as bone densitometry, all of which were easily accessible to all research subjects. Satisfaction was associated with the speed of elective and urgent examinations, which users reported as nearly equivalent in turnaround time. However, although users rated their health plans highly, some difficulties were reported, such as expiration dates on previously dated medical orders; authorization codes required by the operator; bureaucratic scheduling procedures; difficulty accessing treatments such as implants, physical therapy, RPG, Pilates, home care, and check-up consultations; denial of reimbursements; restrictions on surgical materials, especially prostheses and orthoses; and specific prescription-strength restrictions for myopia surgery. It is concluded that the rapid turnaround of high-cost imaging examinations in this sample was described as satisfactory, although the perception of speed may vary with the type of private health plan product contracted, and regulatory improvement is needed in some specific aspects of supplementary health care.
Abstract:
The final degree project described in this document consists of developing a graphical interface for analyzing the accuracy, in measuring harmonics and interharmonics of voltage and current electrical signals, of different techniques that synchronize the sampling frequency with the fundamental frequency. Different techniques for estimating the fundamental frequency and different resampling techniques are studied, applied to analytic signals whose fundamental frequency and harmonic content are known. These processing techniques aim to improve the measurement of harmonic content by reducing, through synchronization of the sampling frequency, the error caused by the spectral leakage introduced by windowing the signals.
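The effect that synchronization is meant to remove can be demonstrated in a few lines. The sketch below (with invented signal parameters) measures a 5th harmonic first with an analysis window locked to the nominal 50 Hz grid, then with a window spanning an exact number of periods of the true fundamental; in practice the second case would be obtained by resampling the acquired record rather than re-simulating it:

```python
import numpy as np

f1 = 50.3      # actual fundamental (Hz), drifted off the nominal 50 Hz
fs = 6400.0    # sampling rate (Hz)
n = 1280       # 0.2 s window: NOT an integer number of true periods

t = np.arange(n) / fs
x = np.sin(2*np.pi*f1*t) + 0.1*np.sin(2*np.pi*5*f1*t)

# Asynchronous measurement: read the 5th-harmonic bin of the nominal grid.
spec = np.abs(np.fft.rfft(x)) * 2.0 / n
amp_async = spec[int(round(5 * 50.0 * n / fs))]

# Synchronized window: exactly 10 periods of the TRUE fundamental, so the
# 5th harmonic lands exactly on bin 5 * 10 = 50 and leakage vanishes.
t_sync = np.arange(n) * (10.0 / f1) / n
x_sync = np.sin(2*np.pi*f1*t_sync) + 0.1*np.sin(2*np.pi*5*f1*t_sync)
amp_sync = np.abs(np.fft.rfft(x_sync))[50] * 2.0 / n
```

With the asynchronous window the harmonic energy smears across neighboring bins and the read amplitude is biased low; with the synchronized window the 0.1 amplitude is recovered essentially exactly.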
Abstract:
There is a growing interest in taking advantage of possible patterns and structures in data so as to extract the desired information and overcome the curse of dimensionality. In a wide range of applications, including computer vision, machine learning, medical imaging, and social networks, the signal that gives rise to the observations can be modeled to be approximately sparse and exploiting this fact can be very beneficial. This has led to an immense interest in the problem of efficiently reconstructing a sparse signal from limited linear observations. More recently, low-rank approximation techniques have become prominent tools to approach problems arising in machine learning, system identification and quantum tomography.
In sparse and low-rank estimation problems, the challenge is the inherent intractability of the objective function, and one needs efficient methods to capture the low-dimensionality of these models. Convex optimization is often a promising tool to attack such problems. An intractable problem with a combinatorial objective can often be "relaxed" to obtain a tractable but almost as powerful convex optimization problem. This dissertation studies convex optimization techniques that can take advantage of low-dimensional representations of the underlying high-dimensional data. We provide provable guarantees that ensure that the proposed algorithms will succeed under reasonable conditions, and answer questions of the following flavor:
- For a given number of measurements, can we reliably estimate the true signal?
- If so, how good is the reconstruction as a function of the model parameters?
More specifically, i) Focusing on linear inverse problems, we generalize the classical error bounds known for the least-squares technique to the lasso formulation, which incorporates the signal model. ii) We show that intuitive convex approaches do not perform as well as expected when it comes to signals that have multiple low-dimensional structures simultaneously. iii) Finally, we propose convex relaxations for the graph clustering problem and give sharp performance guarantees for a family of graphs arising from the so-called stochastic block model. We pay particular attention to the following aspects. For i) and ii), we aim to provide a general geometric framework, in which the results on sparse and low-rank estimation can be obtained as special cases. For i) and iii), we investigate the precise performance characterization, which yields the right constants in our bounds and the true dependence between the problem parameters.
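As a concrete instance of item i), the sketch below recovers a sparse signal with the lasso, solved by the standard ISTA proximal-gradient iteration; the problem sizes and regularization weight are illustrative, not taken from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse recovery with the lasso via ISTA; sizes and lambda are illustrative.
n, p, k = 80, 200, 5
A = rng.standard_normal((n, p)) / np.sqrt(n)   # sensing matrix
x_true = np.zeros(p)
support = rng.choice(p, size=k, replace=False)
x_true[support] = rng.choice([-1.0, 1.0], size=k)
y = A @ x_true + 0.01 * rng.standard_normal(n)

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1/L for the smooth term

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x_hat = np.zeros(p)
for _ in range(2000):
    grad = A.T @ (A @ x_hat - y)               # gradient of 0.5*||Ax - y||^2
    x_hat = soft_threshold(x_hat - step * grad, step * lam)
```

Even with 80 measurements of a 200-dimensional signal, the 5-sparse support is recovered; the error bounds generalized in the dissertation quantify exactly this kind of regime.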
Abstract:
Today, deaths from cardiorespiratory arrest outnumber those from more widely publicized causes such as fires or traffic accidents, yet they receive far less attention. This should be cause for concern because, with proper training of the population in cardiac resuscitation, many of these deaths could be avoided. With the aim of reducing these statistics, numerous studies and research projects have emerged that seek to improve the tools available to both healthcare and non-healthcare personnel. The project presented in this document falls within this framework: instrumenting a training manikin for cardiorespiratory arrest episodes, which will make it possible to analyze in detail the artifact, or interference, generated by the rescuer on the patient while performing the resuscitation maneuver, as well as the interference caused by the electrode-skin contact. It can also be used simply as a training instrument for possible real situations. The reason for using this type of manikin lies mainly in the impossibility of using people, owing to the chest injuries that the compressions could cause. Finally, it should be noted that medical knowledge is not essential for applying basic cardiac resuscitation techniques, an action that dramatically increases a patient's chances of survival, since with every minute that passes after cardiorespiratory arrest the probability of survival decreases by a significantly high percentage.
Building on the above, this document details the technical solution for instrumenting a generic manikin to acquire the following signals: compression force, chest acceleration along three orthogonal axes, compression depth, impedance between the two electrodes placed on the patient's chest, and the electrocardiographic signal emitted by the heart; in addition, a previously recorded electrocardiographic signal can be injected. The database of recordings obtained from these trials can later be used for analysis, since its similarity to signals acquired in a real case is maximal.
Abstract:
An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy-loss / residual-energy technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.
Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 AMU for Z ≈ 3 and ~0.2 AMU for Z ≈ 28. Contributions to the mass resolution from uncertainties in measuring the path length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 AMU (Z ≈ 3) and ~0.3 AMU (Z ≈ 28).
A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.
The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.
These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.
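The energy-loss / residual-energy principle mentioned at the start of the abstract (often called a ΔE-E technique) can be sketched with the classic power-law range-energy approximation; the constants below are illustrative, not the HEIST calibration:

```python
# Power-law range-energy approximation:
#   R = K * E**ALPHA / (M**(ALPHA - 1) * Z**2),
# with E the particle's kinetic energy; ALPHA ~ 1.77 is a common fit value.
ALPHA = 1.77
K = 1.0   # normalization absorbed into the (arbitrary) thickness units

def mass_estimate(delta_e, e_resid, z, thickness):
    """Invert R(delta_e + e_resid) - R(e_resid) = thickness for the mass M:
    the front detector removes delta_e over its thickness, the stack
    measures the residual energy e_resid."""
    e_tot = delta_e + e_resid
    lhs = K * (e_tot**ALPHA - e_resid**ALPHA) / thickness
    return (lhs / z**2) ** (1.0 / (ALPHA - 1.0))

# Round-trip check with an invented oxygen-like event (M = 16, Z = 8):
m_true, z = 16.0, 8
delta_e, e_resid = 40.0, 100.0
thickness = (K * ((delta_e + e_resid)**ALPHA - e_resid**ALPHA)
             / (m_true**(ALPHA - 1.0) * z**2))
m_est = mass_estimate(delta_e, e_resid, z, thickness)
```

Because the inversion depends on ΔE and the residual energy through a steep power law, fluctuations in either measurement (the Landau fluctuations and path-length uncertainties analyzed above) translate directly into the quoted mass resolution.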