902 results for "Uncertainty in governance"


Relevance: 80.00%

Abstract:

Part I

Particles are a key feature of planetary atmospheres. On Earth they represent the greatest source of uncertainty in the global energy budget. This uncertainty can be addressed by making more measurements, by improving the theoretical analysis of measurements, and by better modeling basic particle nucleation and initial particle growth within an atmosphere. This work will focus on the latter two methods of improvement.

Uncertainty in measurements is largely due to particle charging. Accurate descriptions of particle charging are challenging because one deals with particles in a gas, as opposed to a vacuum, so different length scales come into play. Previous studies have considered the effects of the transition between the continuum and kinetic regimes and the effects of two- and three-body interactions within the kinetic regime. These studies, however, rely on questionable assumptions about the charging process, which result in skewed observations and bias in the proposed dynamics of aerosol particles. These assumptions affect both the ions and the particles in the system. Ions are assumed to be point monopoles with a single characteristic speed rather than a speed distribution. Particles are assumed to be perfect conductors carrying up to five elementary charges. The effects of three-body (ion-molecule-particle) interactions are also overestimated. By revising this theory so that the basic physical attributes of both ions and particles, and their interactions, are better represented, we are able to make more accurate predictions of particle charging in both the kinetic and continuum regimes.

The same revised theory used above to model ion charging can also be applied to the flux of neutral vapor-phase molecules to a particle or initial cluster. Using these results we can model the vapor flux to a neutral or charged particle due to diffusion and electromagnetic interactions. In many classical theories currently applied to these models, the finite size of the molecule and the electromagnetic interaction between the molecule and the particle, especially in the neutral-particle case, are completely ignored or, as is often the case for a permanent-dipole vapor species, strongly underestimated. Comparing our model to these classical models, we determine an "enhancement factor" that characterizes how important the addition of these physical parameters and processes is to the understanding of particle nucleation and growth.
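
To make the comparison concrete, here is a minimal Python sketch (not from the thesis) in which the classical baseline is taken to be the continuum diffusion-limited flux to a sphere, J = 4*pi*D*R*n_inf, and the enhancement factor is reported as the ratio of a revised flux to that baseline. The function names and the choice of baseline are assumptions for illustration only.

```python
# Minimal sketch: classical diffusion-limited vapor flux to a spherical
# particle, used as the baseline for an "enhancement factor" once a revised
# flux (finite molecular size, dipole-particle interactions, ...) is known.
import math

def classical_flux(D, R, n_inf):
    """Diffusion-limited molecule flux [1/s] to a sphere of radius R [m],
    for vapor diffusivity D [m^2/s] and far-field concentration n_inf [1/m^3]."""
    return 4.0 * math.pi * D * R * n_inf

def enhancement_factor(J_full, D, R, n_inf):
    """Ratio of a revised flux J_full to the classical diffusion-limited flux."""
    return J_full / classical_flux(D, R, n_inf)
```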

Part II

Whispering gallery mode (WGM) optical biosensors are capable of extraordinarily sensitive specific and non-specific detection of species suspended in a gas or fluid. Recent experimental results suggest that these devices may attain single-molecule sensitivity in protein solutions, in the form of stepwise shifts in their resonance wavelength, $\lambda_R$, but present sensor models predict much smaller steps than were reported. This study examines the physical interaction between a WGM sensor and a molecule adsorbed to its surface, exploring assumptions made in previous efforts to model WGM sensor behavior, and describing computational schemes that model the experiments for which single-protein sensitivity was reported. The resulting model is used to simulate sensor performance, within constraints imposed by the limited material property data. On this basis, we conclude that nonlinear optical effects would be needed to attain the reported sensitivity and that, in the experiments for which extreme sensitivity was reported, a bound protein experiences optical energy fluxes too high for such effects to be ignored.

Relevance: 80.00%

Abstract:

The model dependence inherent in hadronic calculations is one of the dominant sources of uncertainty in the theoretical prediction of the anomalous magnetic moment of the muon. In this thesis, we focus on the charged pion contribution and turn a critical eye on the models employed in the few previous calculations of $a_\mu^{\pi^+\pi^-}$. Chiral perturbation theory provides a check on these models at low energies, and we therefore calculate the charged pion contribution to light-by-light (LBL) scattering to $\mathcal{O}(p^6)$. We show that the dominant corrections to the leading order (LO) result come from two low energy constants which show up in the form factors for the $\gamma\pi\pi$ and $\gamma\gamma\pi\pi$ vertices. Comparison with the existing models reveals a potentially significant omission: none include the pion polarizability corrections associated with the $\gamma\gamma\pi\pi$ vertex. We next consider alternative models where the pion polarizability is produced through exchange of the $a_1$ axial vector meson. These have poor UV behavior, however, making them unsuited for the $a_\mu^{\pi^+\pi^-}$ calculation. We turn to a simpler form-factor modeling approach, generating two distinct models which reproduce the pion polarizability corrections at low energies, have the correct QCD scaling at high energies, and generate finite contributions to $a_\mu^{\pi^+\pi^-}$. With these two models, we calculate the charged pion contribution to the anomalous magnetic moment of the muon, finding values larger than those previously reported: $a_\mu^\mathrm{I} = -1.779(4)\times10^{-10}\,,\,a_\mu^\mathrm{II} = -4.892(3)\times10^{-10}$.

Relevance: 80.00%

Abstract:

In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies, which do not use probability information for the model and input uncertainty sets and therefore yield only the guaranteed (i.e., "worst-case") system performance, with no information about the system's probable performance, which would be of interest to civil engineers.

The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data become available from a controlled structure, the probable performance can easily be updated by applying Bayes's Theorem to revise the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
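
As an illustration of the two operations just described, the following Python sketch (a minimal, hypothetical discretization, not the thesis's asymptotic-approximation code) weights each candidate model's conditional failure probability by its model probability and then updates those model probabilities with Bayes's Theorem when per-model likelihoods of new response data are supplied. All numbers are made up.

```python
# Minimal sketch of robust-performance bookkeeping over a discrete model set.
import numpy as np

def robust_failure_probability(p_fail_given_model, p_model):
    """P(fail) = sum_i P(fail | model_i) * P(model_i)."""
    p_model = np.asarray(p_model, dtype=float)
    return float(np.dot(p_fail_given_model, p_model / p_model.sum()))

def update_model_probabilities(p_model, likelihoods):
    """Bayes's Theorem: P(model_i | data) is proportional to P(data | model_i) * P(model_i)."""
    posterior = np.asarray(likelihoods, dtype=float) * np.asarray(p_model, dtype=float)
    return posterior / posterior.sum()

# Hypothetical example: three candidate models of the uncertain structure.
p_model = [0.5, 0.3, 0.2]        # prior model probabilities
p_fail = [0.01, 0.05, 0.20]      # failure probability under each model
print(robust_failure_probability(p_fail, p_model))       # prior robust performance
p_model_post = update_model_probabilities(p_model, [0.2, 1.0, 0.1])
print(robust_failure_probability(p_fail, p_model_post))  # updated robust performance
```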

The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, only uncertainty in the input model is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with that of higher-order controllers for the same benchmark system based on other approaches. The second application is to the Caltech Flexible Structure, a lightweight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.

Relevance: 80.00%

Abstract:

This thesis presents a technique for obtaining the response of linear structural systems with parameter uncertainties subjected to either deterministic or random excitation. The parameter uncertainties are modeled as random variables or random fields, and are assumed to be time-independent. The new method is an extension of the deterministic finite element method to the space of random functions.

First, the general formulation of the method is developed, in the case where the excitation is deterministic in time. Next, the application of this formulation to systems satisfying the one-dimensional wave equation with uncertainty in their physical properties is described. A particular physical conceptualization of this equation is chosen for study, and some engineering applications are discussed in both an earthquake ground motion and a structural context.

Finally, the formulation of the new method is extended to include cases where the excitation is random in time. Application of this formulation to the random response of a primary-secondary system is described. It is found that parameter uncertainties can have a strong effect on the system response characteristics.

Relevance: 80.00%

Abstract:

Chapter I

Theories for organic donor-acceptor (DA) complexes in solution and in the solid state are reviewed and compared with the available experimental data. As shown by McConnell et al. (Proc. Natl. Acad. Sci. U.S., 53, 46-50 (1965)), the DA crystals fall into two classes: the holoionic class, with a fully or almost fully ionic ground state, and the nonionic class, with little or no ionic character. If the total lattice binding energy 2ε1 (per DA pair) gained in ionizing a DA lattice exceeds the cost 2εo of ionizing each DA pair, i.e., if ε1 + εo < 0, then the lattice is holoionic. The charge-transfer (CT) band in crystals and in solution can be explained, following Mulliken, by a second-order mixing of states, or by any theory that makes the CT transition strongly allowed and yet due to only a small change in the ground state of the non-interacting components D and A (or D+ and A-). The magnetic properties of the DA crystals are discussed.

Chapter II

A computer program, EWALD, was written to calculate by the Ewald fast-convergence method the crystal Coulomb binding energy EC due to classical monopole-monopole interactions for crystals of any symmetry. The precision of EC values obtained is high: the uncertainties, estimated by the effect on EC of changing the Ewald convergence parameter η, ranged from ± 0.00002 eV to ± 0.01 eV in the worst case. The charge distribution for organic ions was idealized as fractional point charges localized at the crystallographic atomic positions: these charges were chosen from available theoretical and experimental estimates. The uncertainty in EC due to different charge distribution models is typically ± 0.1 eV (± 3%): thus, even the simple Hückel model can give decent results.

EC for Wurster's Blue Perchlorate is -4.1 eV/molecule: the crystal is stable under the binding provided by direct Coulomb interactions. EC for N-Methylphenazinium Tetracyanoquinodimethanide is 0.1 eV: exchange Coulomb interactions, which cannot be estimated classically, must provide the necessary binding.

EWALD was also used to test the McConnell classification of DA crystals. For the holoionic (1:1)-(N,N,N',N'-Tetramethyl-para-phenylenediamine:7,7,8,8-Tetracyanoquinodimethan) EC = -4.0 eV while 2εo = 4.65 eV: clearly, exchange forces must provide the balance. For the holoionic (1:1)-(N,N,N',N'-Tetramethyl-para-phenylenediamine:para-Chloranil) EC = -4.4 eV, while 2εo = 5.0 eV: again EC falls short of 2εo. As a Gedankenexperiment, two nonionic crystals were assumed to be ionized: for (1:1)-(Hexamethylbenzene:para-Chloranil) EC = -4.5 eV, 2εo = 6.6 eV; for (1:1)-(Naphthalene:Tetracyanoethylene) EC = -4.3 eV, 2εo = 6.5 eV. Thus, exchange energies in these nonionic crystals must not exceed 1 eV.
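
A small bookkeeping sketch in Python of the comparison quoted above (the compound abbreviations TMPD, TCNQ, HMB and TCNE are mine): it tabulates the quoted EC and 2εo values and prints the gap that non-classical (exchange and other) terms would have to cover in the holoionic crystals, and must fall short of in the nonionic ones. It illustrates the comparison only; the thesis's full energy balance contains terms not captured here.

```python
# Gap between the ionization cost 2*eps_0 and the classical Coulomb binding EC,
# using the values quoted in the abstract (eV).
crystals = {
    "TMPD:TCNQ (holoionic)":       (-4.0, 4.65),
    "TMPD:Chloranil (holoionic)":  (-4.4, 5.0),
    "HMB:Chloranil (nonionic)":    (-4.5, 6.6),
    "Naphthalene:TCNE (nonionic)": (-4.3, 6.5),
}
for name, (ec, cost) in crystals.items():
    gap = cost - abs(ec)  # energy left for non-classical terms to supply
    print(f"{name}: |EC| = {abs(ec):.1f} eV of 2*eps_0 = {cost:.2f} eV, gap = {gap:.2f} eV")
```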

Chapter III

A rapid-convergence quantum-mechanical formalism is derived to calculate the electronic energy of an arbitrary molecular (or molecular-ion) crystal: this provides estimates of crystal binding energies which include the exchange Coulomb interactions. Previously obtained LCAO-MO wavefunctions for the isolated molecule(s) ("unit cell spin-orbitals") provide the starting point. Bloch's theorem is used to construct "crystal spin-orbitals". Overlap between the unit cell orbitals localized in different unit cells is neglected, or is eliminated by Löwdin orthogonalization. Then simple formulas for the total kinetic energy $Q^{XT}_{\lambda}$, nuclear attraction $[\lambda/\lambda]^{XT}$, direct Coulomb $[\lambda\lambda/\lambda'\lambda']^{XT}$ and exchange Coulomb $[\lambda\lambda'/\lambda'\lambda]^{XT}$ integrals are obtained, and direct-space brute-force expansions in atomic wavefunctions are given. Fourier series are obtained for $[\lambda/\lambda]^{XT}$, $[\lambda\lambda/\lambda'\lambda']^{XT}$, and $[\lambda\lambda'/\lambda'\lambda]^{XT}$ with the help of the convolution theorem; the Fourier coefficients require the evaluation of Silverstone's two-center Fourier transform integrals. If the short-range interactions are calculated by brute-force integrations in direct space, and the long-range effects are summed in Fourier space, then rapid convergence is possible for $[\lambda/\lambda]^{XT}$, $[\lambda\lambda/\lambda'\lambda']^{XT}$ and $[\lambda\lambda'/\lambda'\lambda]^{XT}$. This is achieved, as in the Ewald method, by modifying each atomic wavefunction by a "Gaussian convergence acceleration factor" and evaluating separately, in direct and in Fourier space, appropriate portions of $[\lambda/\lambda]^{XT}$, etc., where some of the portions contain the Gaussian factor.

Relevance: 80.00%

Abstract:

While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian (reversible) dynamics settles into a fixed distribution.

Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device or from those of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. Finally, we revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.

Relevance: 80.00%

Abstract:

Recently, the amino acid sequences have been reported for several proteins, including the envelope glycoproteins of Sindbis virus, which all probably span the plasma membrane with a common topology: a large N-terminal, extracellular portion, a short region buried in the bilayer, and a short C-terminal intracellular segment. The regions of these proteins buried in the bilayer correspond to portions of the protein sequences which contain a stretch of hydrophobic amino acids and which have other common characteristics, as discussed. Reasons are also described for uncertainty, in some proteins more than others, as to the precise location of some parts of the sequence relative to the membrane.

The signal hypothesis for the transmembrane translocation of proteins is briefly described and its general applicability is reviewed. There are many proteins whose translocation is accurately described by this hypothesis, but some proteins are translocated in a different manner.

The transmembraneous glycoproteins E1 and E2 of Sindbis virus, as well as the only other virion protein, the capsid protein, were purified in amounts sufficient for biochemical analysis using sensitive techniques. The amino acid composition of each protein was determined, and extensive N-terminal sequences were obtained for E1 and E2. By these techniques E1 and E2 are indistinguishable from most water soluble proteins, as they do not contain an obvious excess of hydrophobic amino acids in their N-terminal regions or in the intact molecule.

The capsid protein was found to be blocked, and so its N-terminus could not be sequenced by the usual methods. However, with the use of a special labeling technique, it was possible to incorporate tritiated acetate into the N-terminus of the protein with good specificity, which was useful in the purification of peptides from which the first amino acids in the N-terminal sequence could be identified.

Nanomole amounts of PE2, the intracellular precursor of E2, were purified by an immuno-affinity technique, and its N-terminus was analyzed. Together with other work, these results showed that PE2 is not synthesized with an N-terminal extension, and the signal sequence for translocation is probably the N-terminal amino acid sequence of the protein. This N-terminus was found to be 80-90% blocked, also by N-acetylation, and this acetylation did not affect its function as a signal sequence. The putative signal sequence was also found to contain a glycosylated asparagine residue, but inhibition of this glycosylation did not lead to cleavage of the sequence.

Relevance: 80.00%

Abstract:

The propagation of cosmic rays through interstellar space has been investigated with the view of determining which particles can traverse astronomical distances without serious loss of energy. The principal mechanism of energy loss for high-energy particles is interaction with radiation. It is found that high-energy ($10^{13}$ to $10^{18}$ eV) electrons drop to one-tenth of their energy in $10^8$ light years in the radiation density of the galaxy, and that protons are not significantly affected over this distance. The origin of the cosmic rays is not known, so various hypotheses as to their origin are examined. If the source is near a star, it is found that the interaction of electrons and photons with the stellar radiation field and the interaction of electrons with the stellar magnetic field limit the amount of energy which these particles can carry away from the star. However, the interaction is not strong enough to affect the energy of protons or light nuclei appreciably. The chief uncertainty in the results is due to the possible existence of a general galactic magnetic field. The main conclusion reached is that if there is a general galactic magnetic field, then the primary spectrum has very few photons and only low-energy ($< 10^{13}$ eV) electrons, and the higher-energy particles are primarily protons regardless of the source mechanism; if there is no general galactic magnetic field, then the source of cosmic rays accelerates mainly protons and the present rate of production is much less than that in the past.

Relevance: 80.00%

Abstract:

A large array has been used to investigate the P-wave velocity structure of the lower mantle. Linear array processing methods are reviewed and a method of nonlinear processing is presented. Phase velocities, travel times, and relative amplitudes of P waves have been measured with the large array at the Tonto Forest Seismological Observatory in Arizona for 125 earthquakes in the distance range of 30 to 100 degrees. Various models are assumed for the upper 771 km of the mantle, and the Wiechert-Herglotz method is applied to the phase velocity data to obtain a velocity-depth structure for the lower mantle. The phase velocity data indicate the presence of a second-order discontinuity at a depth of 840 km, another at 1150 km, and less pronounced discontinuities at 1320, 1700 and 1950 km. Phase velocities beyond 85 degrees are interpreted in terms of a triplication of the phase velocity curve, and this results in a zone of almost constant velocity between depths of 2670 and 2800 km. Because of the uncertainty in the upper mantle assumptions, a final model cannot be proposed, but it appears that the lower mantle is more complicated than the standard models and there is good evidence for second-order discontinuities below a depth of 1000 km. A tentative lower bound of 2881 km can be placed on the depth to the core. The importance of checking the calculated velocity structure against independently measured travel times is pointed out. Comparisons are also made with observed PcP times and the agreement is good. The method of using measured values of the rate of change of amplitude with distance shows promising results.

Relevance: 80.00%

Abstract:

Health is a very important aspect of any person's life, so when a contingency occurs that diminishes the health of an individual or group of people, the different alternatives for combating the disease must be assessed strictly and in detail, because the patients' quality of life will vary depending on the alternative chosen. Health-related quality of life (HRQoL) is understood as the value assigned to the duration of life, modified by social opportunity, perception, functional status and the impairment caused by a disease, accident, treatment or policy (Sacristán et al., 1995). To determine the numerical value assigned to HRQoL for a given intervention, we must draw on economic theory as applied to the health evaluation of new interventions. Among the methods of health economic evaluation, the cost-utility method uses as its measure of utility the quality-adjusted life year (QALY), which takes into account, on the one hand, the quality of life under a medical intervention and, on the other, the number of years the patient is expected to live after it.

To determine quality of life, techniques such as the Standard Gamble, the Time Trade-Off and the Category Rating Scale are used. These techniques yield a numerical value between 0 and 1, where 0 is the worst possible state and 1 is perfect health. When interviewing a patient about utility in terms of health, there may be risk or uncertainty in the question posed. In that case, the Standard Gamble is applied to determine the numerical value of the utility, or quality of life, of the patient under a given treatment. To obtain this value, the patient is presented with two scenarios: first, a health state with some probability of dying and some probability of surviving, and second, a state of certainty. The utility is determined by varying the probability of dying until reaching the probability at which the individual is indifferent between the risky state and the certain state. Similarly, there is the Time Trade-Off, which is easier to apply than the Standard Gamble because it values, on a pair of axes, the health state and the time to be spent in that state under a given treatment; the quality-of-life value is reached by varying the time until the individual is indifferent between the two alternatives. Finally, if what is expected from the patient is a ranked list of preferred health states under a treatment, the Category Rating Scale is used, consisting of a 10-centimetre horizontal line scored from 0 to 100. The interviewee places the list of health states on the scale in order of preference, and the scores are then normalized to the interval between 0 and 1. Quality-adjusted life years are obtained by multiplying the quality-of-life value by the number of years the patient is expected to live.

However, none of these methodologies considers the age factor, making it necessary to include this variable. Moreover, patients may answer subjectively, in which case the opinion of an expert is required to determine the level of disability of the patient. This leads to the concept of the disability-adjusted life year (DALY), in which the QALY utility parameter is the complement of the DALY disability parameter, $Q^i = 1 - D^i$, even though the DALY incorporates age-weighting parameters that the QALY does not. Furthermore, under the assumption $Q = 1 - D$, we can determine the individual's quality of life before the treatment. Once the QALYs gained have been obtained, we proceed to their monetary valuation. To do so, we start from the assumption that the health intervention allows the individual to return to the work he or she had been doing, so we value the probable wages over a period equal to the QALYs gained, bearing in mind the limitation involved in applying this approach. Finally, we analyse the benefits derived from the treatment (probable wage bill) using the GRF-95 (female population) and GRM-95 (male population) tables.
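
The following Python sketch (hypothetical numbers throughout) illustrates the quantities described above: a Standard Gamble utility read off at the indifference probability, the assumed complementary relation Q = 1 - D between QALY utility and DALY disability weights, and QALYs gained as the quality weight multiplied by the expected years of life.

```python
# Minimal sketch of the QALY bookkeeping described in the abstract.
def standard_gamble_utility(p_indifference):
    """Utility of a health state: the probability of full health at which the
    patient is indifferent between the gamble (full health with probability p,
    death with probability 1 - p) and remaining in the state with certainty."""
    return p_indifference

def utility_from_disability(disability_weight):
    """Complementary relation assumed in the text: Q = 1 - D."""
    return 1.0 - disability_weight

def qalys(utility, life_years):
    """QALYs = quality-of-life weight * expected years of life."""
    return utility * life_years

# Hypothetical example: indifference point p = 0.8 after treatment, expert-assessed
# disability D = 0.45 before treatment, 10 expected years of life.
q_after = standard_gamble_utility(0.8)
q_before = utility_from_disability(0.45)
print(qalys(q_after, 10) - qalys(q_before, 10))  # QALYs gained by the intervention
```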

Relevance: 80.00%

Abstract:

We address the valuation of an operating wind farm and the finite-lived option to invest in it under different reward/support schemes: a constant feed-in tariff, a premium on top of the electricity market price (either a fixed premium or a variable subsidy such as a renewable obligation certificate or ROC), and a transitory subsidy, among others. Futures contracts on electricity with ever longer maturities enable market-based valuations to be undertaken. The model considers up to three sources of uncertainty: the electricity price, the level of wind generation, and the certificate (ROC) price where appropriate. When analytical solutions are lacking, we resort to a trinomial lattice combined with Monte Carlo simulation; we also use a two-dimensional binomial lattice when uncertainty in the ROC price is considered. Our data set refers to the UK. The numerical results show the impact of several factors involved in the decision to invest: the subsidy per MWh generated, the initial lump-sum subsidy, the maturity of the investment option, and electricity price volatility. Different combinations of variables can help bring forward investments in wind generation. One-off policies, e.g., a transitory initial subsidy, seem to have a stronger effect than a fixed premium per MWh produced.
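
As a rough illustration of the market-based valuation idea, and not the paper's lattice or simulation model, the Python sketch below values an operating farm under a fixed premium by Monte Carlo, with a geometric Brownian motion electricity price and independent, normally distributed annual generation. Every parameter value is hypothetical.

```python
# Minimal Monte Carlo sketch: discounted expected revenue of an operating wind
# farm receiving the market price plus a fixed premium per MWh.
import numpy as np

rng = np.random.default_rng(0)

def farm_value(p0=50.0, premium=10.0, mwh_mean=60_000, mwh_sd=9_000,
               years=20, r=0.03, sigma=0.2, n_paths=20_000):
    """Expected discounted revenue (in the price currency) over the farm's life."""
    t = np.arange(1, years + 1)
    # Risk-neutral GBM price paths, one value per year per path.
    shocks = rng.normal(size=(n_paths, years))
    prices = p0 * np.exp(np.cumsum((r - 0.5 * sigma**2) + sigma * shocks, axis=1))
    # Wind generation uncertainty, independent of price in this sketch.
    gen = rng.normal(mwh_mean, mwh_sd, size=(n_paths, years)).clip(min=0.0)
    cashflows = (prices + premium) * gen
    return float((cashflows * np.exp(-r * t)).sum(axis=1).mean())

print(f"Estimated value: {farm_value():,.0f}")
```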

Relevance: 80.00%

Abstract:

In the various segments of geotechnical engineering, and in foundation engineering in particular, the engineer faces a series of uncertainties. Some of these uncertainties are inherent to the local variability of the soil, the loading conditions, the effects of time, differences in execution procedures and errors in site investigation, and they directly affect the estimate of the bearing capacity of the foundation, whether at the time of static loading or during or shortly after driving. The objective of this dissertation is to adapt to onshore piles a procedure originally conceived for offshore piles, which updates the estimate of resistance during driving on the basis of records documented during execution. In this procedure the updating is carried out by applying the concepts of Bayesian analysis, assuming that the parameters of the probability distribution used are random variables. The uncertainty in the parameters is modelled by prior and posterior distributions. The posterior distribution is calculated by updating the prior distribution with a likelihood function that contains the observations obtained from the driving records. The procedure is applied to a set of piles from an extensive piling job carried out in the West Zone of Rio de Janeiro. The updated estimates are then compared with the results of dynamic load tests. Several applications may arise from the use of this procedure, such as the selection of the piles that, because they present a low updated resistance estimate or a larger uncertainty in this estimate, should be submitted to load tests. Extending this study to different pile types in soil profiles of different natures may lead to the development of more suitable execution control systems, capable of identifying the main uncertainties present in the different types of pile execution, thus contributing to the optimization of future foundation designs.
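
A minimal Python sketch of the Bayesian updating idea, using a conjugate normal prior and likelihood rather than the dissertation's actual formulation; the resistance values, scatter and prior below are hypothetical.

```python
# Conjugate normal-normal update of a pile resistance estimate from driving records.
import numpy as np

def update_resistance(prior_mean, prior_sd, observations, obs_sd):
    """Posterior (mean, sd) of pile resistance given a normal prior and
    independent normal observations with known scatter obs_sd."""
    obs = np.asarray(observations, dtype=float)
    prec_post = 1.0 / prior_sd**2 + len(obs) / obs_sd**2
    mean_post = (prior_mean / prior_sd**2 + obs.sum() / obs_sd**2) / prec_post
    return mean_post, np.sqrt(1.0 / prec_post)

# Hypothetical pile: design-method prior of 1500 kN (sd 400 kN) and two
# resistance values inferred from driving records.
mean, sd = update_resistance(1500.0, 400.0, [1250.0, 1320.0], 200.0)
print(f"updated resistance: {mean:.0f} kN (sd {sd:.0f} kN)")
# Piles with a low updated mean, or a still-large sd, are natural candidates
# for the load tests mentioned above.
```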

Relevance: 80.00%

Abstract:

This research aims to document the process of reducing the risks and uncertainties of an electronic game under development through the application of usability evaluation methods. A case study was carried out on the use of usability evaluation methods and techniques during the production of the electronic game Dungeonland, conducted between 2010 and 2013 across several iterations of the product, from pre-production to release. The methods used were problem-based direct observation, cooperative evaluation, questionnaires and semi-structured interviews. The data collected show the evolution of the game's design, the different methodologies employed at each stage of development, and the impact of the evaluation on the project. Despite problems and limitations in applying usability tests to this product, the developers regarded the impact of the evaluation as very large and very positive: through qualitative data such as verbal protocols and user gameplay, and quantitative data about users' experiences with the product that can be compared statistically, game developers have at their disposal powerful tools for establishing clear, user-centred design processes that offer an environment in which problems are quickly identified and solutions are validated with real users.

Relevance: 80.00%

Abstract:

This paper is aimed at designing a robust vaccination strategy capable of eradicating an infectious disease from a population regardless of the potential uncertainty in the parameters defining the disease. For this purpose, a control-theoretic approach based on a sliding-mode control law is used. Initially, the controller is designed assuming knowledge of an upper bound on the uncertainty signal. Afterwards, this condition is removed and an adaptive sliding-mode control system is designed. The closed-loop properties are proved mathematically in both the nonadaptive and adaptive cases. Furthermore, the usual sign function appearing in sliding-mode control is replaced by the saturation function in order to prevent chattering. In addition, the properties achieved by the closed-loop system under this variation are also stated and proved analytically. The closed-loop system is able to attain the control objective regardless of the parametric uncertainties of the model and the lack of a priori knowledge of the system.
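
The sketch below (Python, hypothetical parameters) illustrates only the chattering-avoidance idea on a standard SIR model: the switching term uses a saturation function with a boundary layer instead of the sign function, and the vaccination rate is clipped to a feasible range. It is not the paper's model, sliding surface or adaptive law.

```python
# Saturation-based switching vaccination law on a normalized SIR model.
import numpy as np

beta, gamma, mu = 0.4, 0.1, 0.01   # infection, recovery, birth/death rates
phi, u_max = 0.01, 0.9             # boundary-layer width, maximum vaccination rate

def sat(x):
    return np.clip(x, -1.0, 1.0)

def vaccination(i):
    """u = u_max * sat(s / phi) with sliding variable s = i (infected fraction),
    clipped to [0, u_max]; large outbreaks push vaccination to its maximum."""
    return float(np.clip(u_max * sat(i / phi), 0.0, u_max))

# Forward-Euler simulation of the closed loop.
s_, i_, r_, dt = 0.9, 0.1, 0.0, 0.01
for _ in range(int(300 / dt)):
    u = vaccination(i_)
    ds = mu - beta * s_ * i_ - mu * s_ - u * s_
    di = beta * s_ * i_ - (gamma + mu) * i_
    dr = gamma * i_ + u * s_ - mu * r_
    s_, i_, r_ = s_ + dt * ds, i_ + dt * di, r_ + dt * dr
print(f"infected fraction after 300 time units: {i_:.2e}")
```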

Relevance: 80.00%

Abstract:

The exponential growth of health expenditure demands economic studies to support the decisions of public and private agents regarding the incorporation of new technologies into health systems. Positron emission tomography (PET) is a nuclear medicine imaging technology of high cost whose diffusion in the country is still recent. The level of scientific evidence accumulated regarding its use in non-small cell lung cancer (NSCLC) is significant, with the technology showing greater accuracy than conventional imaging techniques in mediastinal and distant staging. An economic evaluation carried out in 2013 indicates that it is cost-effective in NSCLC staging compared with the current management strategy based on computed tomography (CT), from the perspective of the SUS, the Brazilian Unified Health System. The technology was incorporated into the list of procedures provided by the SUS by the Ministry of Health in April 2014, but the economic and financial impacts of this decision are still unknown. This study sought to estimate the budget impact (BI) of incorporating PET into NSCLC staging for the years 2014 to 2018, from the perspective of the SUS as the payer for health care. The estimates were calculated by the epidemiological method and were based on the decision model of the cost-effectiveness study previously carried out. National incidence data were used, together with disease distribution and diagnostic accuracy data from the literature, and cost data from a micro-costing study and from SUS databases. Two strategies for the use of the new technology were analysed: (a) offering PET-CT to all patients; and (b) offering it only to those with negative results on a previous CT. In addition, univariate and extreme-scenario sensitivity analyses were performed to assess the influence on the results of possible sources of uncertainty in the parameters used. Incorporating PET-CT into the SUS would require additional resources of R$ 158.1 million (restricted offer) to R$ 202.7 million (broad offer) over five years, and the difference between the two offer strategies is R$ 44.6 million over the period. In absolute terms, the total BI would be R$ 555 million (PET-CT after negative CT) and R$ 600 million (PET-CT for all) over the period. The cost of the PET-CT procedure was the parameter with the greatest influence on the expenditure estimates related to the new technology, followed by the proportion of patients undergoing mediastinoscopy. In the most optimistic extreme scenario, the incremental BIs would fall to R$ 86.9 million (PET-CT after negative CT) and R$ 103.9 million (PET-CT for all), while in the most pessimistic scenario they would rise to R$ 194.0 million and R$ 242.2 million, respectively. The BI results, together with the evidence on the technology's cost-effectiveness, lend greater rationality to managers' final decisions. The incorporation of PET into the clinical staging of NSCLC appears financially feasible given the size of the Ministry of Health budget, and the potential reduction in the number of unnecessary surgeries may lead to a more efficient allocation of available resources and better outcomes for patients, with better-indicated therapeutic strategies.
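
For illustration of the epidemiological budget-impact bookkeeping only, and with entirely hypothetical inputs rather than the study's populations and costs, a minimal Python sketch:

```python
# Eligible incident cases per year times the PET-CT procedure cost, under the
# two offer strategies described above, summed over a five-year horizon.
def budget_impact(incident_cases, uptake, frac_ct_negative, unit_cost, years=5):
    eligible_all = [c * u for c, u in zip(incident_cases, uptake)]   # strategy (a): all patients
    eligible_neg = [e * frac_ct_negative for e in eligible_all]      # strategy (b): negative CT only
    bi_all = sum(e * unit_cost for e in eligible_all[:years])
    bi_neg = sum(e * unit_cost for e in eligible_neg[:years])
    return bi_all, bi_neg

# Hypothetical inputs: yearly eligible NSCLC cases, growing diffusion of the
# technology, 60% of patients with a negative prior CT, R$ 3,000 per scan.
cases = [20_000] * 5
uptake = [0.2, 0.4, 0.6, 0.8, 1.0]
bi_all, bi_neg = budget_impact(cases, uptake, 0.60, 3_000.0)
print(f"PET-CT for all: R$ {bi_all:,.0f}; only after negative CT: R$ {bi_neg:,.0f}")
```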