969 results for Real-world


Relevance: 60.00%

Abstract:

Studies in turbulence often focus on two flow conditions, both of which occur frequently in real-world flows and are sought after for their value in advancing turbulence theory: the high Reynolds number regime and the effect of wall surface roughness. In this dissertation, a Large-Eddy Simulation (LES) recreates both conditions over a wide range of Reynolds numbers, Re_τ = O(10^2)-O(10^8), and accounts for roughness by locally modeling the statistical effects of near-wall anisotropic fine scales in a thin layer immediately above the rough surface. A subgrid, roughness-corrected wall model is introduced to dynamically transmit this modeled information from the wall to the outer LES, which uses a stretched-vortex subgrid-scale model operating in the bulk of the flow. Of primary interest is the Reynolds number and roughness dependence of these flows in terms of first- and second-order statistics. The LES is first applied to a fully turbulent, uniformly smooth/rough channel flow to capture the flow dynamics over the smooth, transitionally rough, and fully rough regimes. Results include a Moody-like diagram for the wall-averaged friction factor, believed to be the first of its kind obtained from LES. Confirmation is found for the experimentally observed logarithmic behavior in the normalized streamwise turbulent intensities. Tight logarithmic collapse, scaled on the wall friction velocity, is found for smooth-wall flows when Re_τ ≥ O(10^6) and in fully rough cases. Since the wall model operates locally and dynamically, the framework is used to investigate channel flows with non-uniform roughness distributions, where the flow adjustments to sudden surface changes are examined. Recovery of mean quantities and turbulent statistics after such transitions is discussed qualitatively and quantitatively at various roughness and Reynolds number levels. The internal boundary layer, defined as the border between the flow affected by the new surface condition and the unaffected part, is computed, and a collapse of the profiles on a length scale containing the logarithm of the friction Reynolds number is presented. Finally, we turn to the possibility of expanding the present framework to accommodate more general geometries. As a first step, the whole LES framework is modified for use in the curvilinear geometry of a fully developed turbulent pipe flow, with the implementation carried out in a spectral element solver capable of handling complex wall profiles. The friction factors show favorable agreement with the superpipe data, and the LES estimates of the Kármán constant and additive constant of the log law closely match values obtained from experiment.
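
For reference, the logarithmic law referred to in this abstract is conventionally written as follows (standard notation assumed here; the thesis may use slightly different symbols or constants):

```latex
u^+ = \frac{1}{\kappa}\,\ln y^+ + B,
\qquad
u^+ \equiv \frac{\bar{u}}{u_\tau},
\qquad
y^+ \equiv \frac{y\,u_\tau}{\nu},
```

where κ is the Kármán constant, B the smooth-wall additive constant, u_τ the friction velocity, and ν the kinematic viscosity; for rough walls the intercept is conventionally shifted by a roughness function ΔU^+(k_s^+) of the equivalent sand-grain roughness height.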

Relevance: 60.00%

Abstract:

This work concerns itself with the possibility of solutions, both cooperative and market-based, to pollution abatement problems. In particular, we are interested in pollutant emissions in Southern California and possible solutions to the abatement problems enumerated in the 1990 Clean Air Act. A tradable pollution permit program has been implemented to reduce emissions, creating property rights associated with various pollutants.

Before we discuss the performance of market-based solutions to LA's pollution woes, we consider the existence of cooperative solutions. In Chapter 2, we examine pollutant emissions as a transboundary public bad. We show that for a class of environments in which pollution moves in a bi-directional, acyclic manner, there exists a sustainable coalition structure and associated levels of emissions. We do so via a new core concept, one more appropriate to modeling cooperative emissions agreements (and potential defection from them) than the standard definitions.

However, this leaves the question of implementing pollution abatement programs unanswered. While the existence of a cost-effective permit market equilibrium has long been understood, the implementation of such programs has been difficult. The design of Los Angeles' REgional CLean Air Incentives Market (RECLAIM) alleviated some of the implementation problems and in part exacerbated them. For example, it created two overlapping cycles of permits and two zones of permits for different geographic regions. While these design features create a market that allows some measure of regulatory control, they establish a very difficult trading environment with the potential for inefficiency arising from the transaction costs enumerated above and the illiquidity induced by the myriad assets and relatively few participants in this market.

It was with these concerns in mind that the ACE market (Automated Credit Exchange) was designed. The ACE market utilizes an iterated combined-value call market (CV Market). Before discussing the performance of the RECLAIM program in general and the ACE mechanism in particular, we test experimentally whether a portfolio trading mechanism can overcome market illiquidity. Chapter 3 experimentally demonstrates the ability of a portfolio trading mechanism to overcome portfolio rebalancing problems, thereby inducing sufficient liquidity for markets to fully equilibrate.

With experimental evidence in hand, we consider the CV Market's performance in the real world. We find that as the allocation of permits is reduced to the level of historical emissions, prices increase. As of April of this year, prices are roughly equal to the cost of the Best Available Control Technology (BACT). This took longer than expected, due both to tendencies to misreport emissions under the old regime and to abatement technology advances encouraged by the program. We also find that the ACE market provides liquidity where needed to encourage long-term planning on behalf of polluting facilities.

Relevance: 60.00%

Abstract:

Dynamic rupture simulations are unique in their contributions to the study of earthquake physics. The current rapid development of dynamic rupture simulations poses several new questions: Do the simulations reflect the real world? Do the simulations have predictive power? Which one should we believe when the simulations disagree? This thesis illustrates how integration with observations can help address these questions and reduce the effects of non-uniqueness in both dynamic rupture simulations and kinematic inversion problems. Dynamic rupture simulations with observational constraints can effectively identify non-physical features inferred from observations. Moreover, the integrative technique can also provide more physical insight into the mechanisms of earthquakes. This thesis demonstrates two examples of such integration: dynamic rupture simulations of the Mw 9.0 2011 Tohoku-Oki earthquake and of earthquake ruptures in damaged fault zones:

(1) We develop simulations of the Tohoku-Oki earthquake based on a variety of observations and minimal assumptions about model parameters. The simulations provide realistic estimates of the stress drop and fracture energy of the region and explain the physical mechanisms of high-frequency radiation in the deep region. We also find that the overriding subduction wedge contributes significantly to the up-dip rupture propagation and large final slip in the shallow region. Such findings are also applicable to other megathrust earthquakes.

(2) Damaged fault zones are usually found around natural faults, but their effects on earthquake ruptures have remained largely unknown. We simulate earthquake ruptures in damaged fault zones with material properties constrained by seismic and geological observations. We show that reflected waves in fault zones are effective at generating pulse-like ruptures, and that head waves tend to accelerate and decelerate rupture speeds. These mechanisms are robust in natural fault zones with large attenuation and off-fault plasticity. Moreover, earthquakes in damaged fault zones can propagate at super-Rayleigh speeds that are unstable in homogeneous media. Supershear transitions in fault zones do not require large fault stresses. Finally, we present observations in the Big Bear region, where the variability of rupture speeds of small earthquakes correlates with the laterally variable materials in a damaged fault zone.

Relevance: 60.00%

Abstract:

The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.

Load control needs to be conducted with network awareness; otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks, which are multiphase and radial, most power flow studies focus on single-phase networks.

This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.

Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules deferrable loads to shape the net electricity demand. Deferrable loads refer to loads whose total energy consumption is fixed but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm and empirically converges to optimal deferrable load schedules within 15 iterations.
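
The thesis does not reproduce Algorithm 1 here, so the following is only a minimal sketch of a projected-gradient scheme of the kind the paragraph describes: each deferrable load repeatedly takes a gradient step against the broadcast net demand and projects back onto its own energy budget. The quadratic "flatten the net demand" objective, the helper names, and all numbers are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def project_to_budget(p, energy):
    """Euclidean projection of p onto {q >= 0, sum(q) = energy}."""
    u = np.sort(p)[::-1]                      # sort-based scaled-simplex projection
    css = np.cumsum(u) - energy
    rho = np.nonzero(u - css / (np.arange(len(p)) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(p - theta, 0.0)

def schedule_deferrable_loads(base_minus_renewables, energies, horizon,
                              step=0.05, iters=15):
    """Illustrative distributed gradient scheme: flatten the net demand
    sum_t (d(t) + sum_n p_n(t))^2 subject to per-load energy budgets."""
    profiles = np.array([[e / horizon] * horizon for e in energies])
    for _ in range(iters):
        net = base_minus_renewables + profiles.sum(axis=0)   # broadcast once per iteration
        for n in range(len(energies)):                        # local, per-load updates
            grad = 2.0 * net                                  # d/dp_n(t) of sum_t net(t)^2
            profiles[n] = project_to_budget(profiles[n] - step * grad, energies[n])
    return profiles

# Example: 24-slot horizon with an evening peak and three EV-like deferrable loads.
d = np.array([3.0] * 17 + [6.0, 7.0, 7.0, 6.0] + [3.0] * 3)
p = schedule_deferrable_loads(d, energies=[5.0, 8.0, 4.0], horizon=24)
print(np.round(d + p.sum(axis=0), 2))  # flattened net demand
```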

We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model predictive control: Algorithm 2 uses updated predictions of renewable generation as the true values and computes a pseudo load to simulate future deferrable load. The pseudo load consumes zero power at the current time step, and its total energy consumption equals the expected total energy request of future deferrable loads.
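
Continuing the same illustrative assumptions (and reusing the helpers from the previous sketch), one receding-horizon step of an Algorithm-2-style controller might look like this; the pseudo load is simply one extra energy budget whose current-slot consumption is forced to zero. This is a sketch of the idea, not the thesis's Algorithm 2.

```python
def mpc_step(t, horizon, base_forecast, active_energies, expected_future_energy):
    """One receding-horizon step (illustrative sketch).
    base_forecast: predicted base-minus-renewable demand for slots t..horizon-1.
    active_energies: remaining energy budgets of deferrable loads already present.
    expected_future_energy: forecast total energy of loads that have not yet arrived."""
    window = horizon - t
    energies = list(active_energies) + [expected_future_energy]   # last entry = pseudo load
    profiles = schedule_deferrable_loads(base_forecast, energies, window)
    # The pseudo load may not consume anything now; push its budget into later slots.
    if window > 1:
        profiles[-1, 0] = 0.0
        profiles[-1, 1:] = project_to_budget(profiles[-1, 1:], expected_future_energy)
    # Only the current-slot decisions of the real loads are implemented;
    # everything is re-solved at the next step with updated forecasts.
    return profiles[:len(active_energies), 0]
```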

Network constraints, e.g., transformer loading constraints and voltage regulation constraints, bring significant challenges to the load control problem, since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation and the other seeking a locally optimal load schedule.

To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but is numerically much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, and 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.
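
For context, the single-phase (scalar) branch flow, or DistFlow, equations for a line from bus i to bus j are commonly written as below; the thesis's BFM-SDP is a multiphase matrix generalization, so this is only the familiar special case stated under standard conventions:

```latex
P_{ij} = \sum_{k:\,j\to k} P_{jk} + r_{ij}\,\ell_{ij} + p_j, \qquad
Q_{ij} = \sum_{k:\,j\to k} Q_{jk} + x_{ij}\,\ell_{ij} + q_j,

v_j = v_i - 2\,(r_{ij} P_{ij} + x_{ij} Q_{ij}) + (r_{ij}^2 + x_{ij}^2)\,\ell_{ij}, \qquad
\ell_{ij} = \frac{P_{ij}^2 + Q_{ij}^2}{v_i},
```

where v = |V|^2, ℓ = |I|^2, and (p_j, q_j) are the net real and reactive consumption at bus j. The relaxation replaces the last equality by the inequality ℓ_{ij} ≥ (P_{ij}^2 + Q_{ij}^2)/v_i, which is a second-order cone constraint in this scalar case and becomes a positive semidefinite constraint in the multiphase matrix form.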

Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternative-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.

To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and shown to be close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated based on a linear approximation of the power flow, which is derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 achieves a more than 70-fold speedup over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
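
The linear approximation described here is presumably in the spirit of the widely used LinDistFlow model, obtained from the branch flow equations sketched earlier by dropping the loss terms ℓ_ij; this illustrates the stated assumptions and is not necessarily the exact approximation used by Algorithm 9:

```latex
P_{ij} \approx \sum_{k:\,j\to k} P_{jk} + p_j, \qquad
Q_{ij} \approx \sum_{k:\,j\to k} Q_{jk} + q_j, \qquad
v_j \approx v_i - 2\,(r_{ij} P_{ij} + x_{ij} Q_{ij}).
```

Because the remaining relations are affine in (P, Q, v), gradients of voltage-dependent objectives and constraints with respect to the load decisions can be written in closed form, which is what makes a gradient-descent scheme of this kind cheap per iteration.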

Relevance: 60.00%

Abstract:

Sovereignty has been conceptualized in many different ways throughout history. Even so, it has remained the most elementary category of international law; expressing the basis on which States act, it was through sovereignty that international law developed from the seventeenth century to the present day. This points to a distinction between, on one side, the content of sovereignty, that is, its mode of manifestation, its concept, which changes in each historical period, and, on the other, the international legal form expressed by sovereignty, which remains intact and exists independently of the content given to it, that is, the place it occupies in international law. Through an analysis of the concept of sovereignty offered by three classic authors from different historical periods (Hugo Grotius, Pasquale Mancini, and Hans Kelsen), this work aims to demonstrate the ideological character of each theory and, consequently, its inaccuracy. To do so, the dialectical materialist method was adopted, according to which the production of ideas by man must be observed within the limits of his conditions of existence, and the ideas produced must be seen as a conscious reflection of the real world. The right of superiority asserted by Grotius is thus examined within the limits of the conditions of human existence that were changing with the transition from feudalism to capitalism, and its meaning is drawn from the struggle between the Church and the monarchs who were centralizing power under themselves. Likewise, Mancini's right of nationality is examined under the conditions of existence created by the maturing of the social classes of capitalism in Western Europe as a result of the Industrial Revolution, and its meaning is drawn from the revolutionary struggles for national liberation then unfolding there. Finally, the essentially limited character of sovereignty in Kelsen is examined in the context of the passage of capitalism into its imperialist epoch, as a conscious reflection of the developments experienced by international law at the end of the nineteenth century and the beginning of the twentieth, after the First World War. Thus, beyond demonstrating the ideological character and inaccuracy of the concepts mentioned, the work seeks to show that the content of sovereignty in each historical period analyzed finds its raison d'être in the corresponding stage of capitalist development, and that the legal form of sovereignty, that is, the place it occupies in international law, is determined by capitalism's need for an instrument of force to secure the accumulation of capital: the sovereign State.

Relevance: 60.00%

Abstract:

Multi-finger caging offers a rigorous and robust approach to robot grasping. This thesis provides several novel algorithms for caging polygons and polyhedra in two and three dimensions. Caging refers to a robotic grasp that does not necessarily immobilize an object, but prevents it from escaping to infinity. The first algorithm considers caging a polygon in two dimensions using two point fingers. The second algorithm extends the first to three dimensions. The third algorithm considers caging a convex polygon in two dimensions using three point fingers, and considers robustness of this cage to variations in the relative positions of the fingers.

This thesis describes an algorithm for finding all two-finger cage formations of planar polygonal objects based on a contact-space formulation. It shows that two-finger cages have several useful properties in contact space. First, the critical points of the cage representation in the hand’s configuration space appear as critical points of the inter-finger distance function in contact space. Second, these critical points can be graphically characterized directly on the object’s boundary. Third, contact space admits a natural rectangular decomposition such that all critical points lie on the rectangle boundaries, and the sublevel sets of contact space and free space are topologically equivalent. These properties lead to a caging graph that can be readily constructed in contact space. Starting from a desired immobilizing grasp of a polygonal object, the caging graph is searched for the minimal, intermediate, and maximal caging regions surrounding the immobilizing grasp. An example constructed from real-world data illustrates and validates the method.

A second algorithm is developed for finding caging formations of a 3D polyhedron for two point fingers, using a lower-dimensional contact-space formulation. Results from the two-dimensional algorithm are extended to three dimensions. Critical points of the inter-finger distance function are shown to be identical to the critical points of the cage. A decomposition of contact space into 4D regions having useful properties is demonstrated. A geometric analysis of the critical points of the inter-finger distance function results in a catalog of grasps in which the cages change topology, leading to a simple test to classify critical points. With these properties established, the search algorithm from the two-dimensional case may be applied to the three-dimensional problem. An implemented example demonstrates the method.

This thesis also presents a study of cages of convex polygonal objects using three point fingers. It considers a three-parameter model of the relative position of the fingers, which gives complete generality for three point fingers in the plane. It analyzes robustness of caging grasps to variations in the relative position of the fingers without breaking the cage. Using a simple decomposition of free space around the polygon, we present an algorithm which gives all caging placements of the fingers and a characterization of the robustness of these cages.

Relevance: 60.00%

Abstract:

Climate change is arguably the most critical issue facing our generation and the next. As we move towards a sustainable future, the grid is rapidly evolving with the integration of more and more renewable energy resources and the emergence of electric vehicles. In particular, large-scale adoption of residential and commercial solar photovoltaic (PV) plants is completely changing the traditional slowly varying, unidirectional power flow nature of distribution systems. A high share of intermittent renewables poses several technical challenges, including voltage and frequency control. But along with these challenges, renewable generators also bring with them millions of new DC-AC inverter controllers each year. These fast power electronic devices can provide an unprecedented opportunity to increase energy efficiency and improve power quality, if combined with well-designed inverter control algorithms. The main goal of this dissertation is to develop scalable power flow optimization and control methods that achieve system-wide efficiency, reliability, and robustness for the power distribution networks of the future with high penetration of distributed inverter-based renewable generators.

Proposed solutions to power flow control problems in the literature range from fully centralized to fully local ones. In this thesis, we will focus on the two ends of this spectrum. In the first half of this thesis (chapters 2 and 3), we seek optimal solutions to voltage control problems given a centralized architecture with complete information. These solutions are particularly important for better understanding the overall system behavior and can serve as a benchmark against which to compare the performance of other control methods. To this end, we first propose a branch flow model (BFM) for the analysis and optimization of radial and meshed networks. This model leads to a new approach for solving optimal power flow (OPF) problems using a two-step relaxation procedure, which has proven to be both reliable and computationally efficient in dealing with the non-convexity of power flow equations in radial and weakly meshed distribution networks. We then apply the results to the fast time-scale inverter var control problem and evaluate the performance on real-world circuits in Southern California Edison's service territory.

The second half (chapters 4 and 5), however, is dedicated to the study of local control approaches, as they are the only options available for immediate implementation on today's distribution networks, which lack sufficient monitoring and communication infrastructure. In particular, we follow a reverse- and forward-engineering approach to study the recently proposed piecewise-linear volt/var control curves. It is the aim of this dissertation to tackle some key problems in these two areas and to contribute by providing a rigorous theoretical basis for future work.
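
As a small illustration of the kind of local rule studied in chapters 4 and 5, a piecewise-linear volt/var curve maps the locally measured voltage magnitude to a reactive power command. The deadband, slope, and limit values below are invented for the example and are not taken from the dissertation:

```python
import numpy as np

def volt_var_curve(v, v_ref=1.0, deadband=0.01, slope=20.0, q_max=0.44):
    """Piecewise-linear volt/var control: reactive power command (per unit of
    inverter rating) as a function of measured voltage magnitude (p.u.).
    Flat inside the deadband, linear droop outside, saturated at +/- q_max."""
    dv = v - v_ref
    q = np.where(np.abs(dv) <= deadband, 0.0,
                 -slope * (dv - np.sign(dv) * deadband))
    return np.clip(q, -q_max, q_max)

# Low voltage -> inject vars, high voltage -> absorb vars, no communication needed.
for v in (0.95, 0.995, 1.0, 1.02, 1.05):
    print(v, round(float(volt_var_curve(v)), 3))
```

Such a curve counteracts local voltage deviations using only local measurements, which is what makes it attractive for networks without monitoring and communication infrastructure.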

Relevance: 60.00%

Abstract:

The focus of this dissertation is the relationship between Pentecostalism and television technology in the village of Provetá, an evangelical community on Ilha Grande, in the municipality of Angra dos Reis, State of Rio de Janeiro. The residents of Provetá are mostly members of the Pentecostal church Assembléia de Deus (Assembly of God), established there since the early 1930s. Television, by contrast, was introduced in the village only in 1987, through access to satellite dishes. The central aim of the dissertation is to understand the historical and social process by which television technology entered Provetá, and how this technology relates to Pentecostal media and practices of mediation with the transcendental. Initially condemned by the then presiding pastor of the Assembly of God as an exclusively diabolical technology, and therefore irrevocably forbidden to church members, television was progressively resignified by the church's leadership as an ambivalent technology through which both God and the Devil would be able to operate. The dissertation thus explores the process of negotiation over the religious meanings attributed to television, as well as the normative regimes governing its consumption. Given the ambivalent status of television technology in the eyes of the church's contemporary leadership (its potential to serve God or the Devil, good or evil), I seek to understand an ethics of watching that seems to underlie the process by which television consumption is authorized, that is, how a believer ought to watch it. Watching television in an ethically appropriate way would therefore require an essential knowledge of how to watch. This knowledge, I argue, is knowledge of the Bible. Through a biblical regime of truth, the people of Provetá draw a fundamental distinction between televised content considered factual or real, whose consumption would be harmless or beneficial, and content considered fabricated, staged, or constructed, whose consumption would be inappropriate or even dangerous. Starting from the identification of Rede Globo's telenovelas as lies of a diabolical nature, and of the television news as a space of learning for the believer, I undertake a reception analysis of these programs, seeking to understand the symbolic and sensory dynamics of identifying the presence and agency of God and the Devil through television. The central argument developed in the dissertation is that the experience of watching television in Provetá is structured by a Pentecostal aesthetics fostered by the Bible. Out of religious practices of mediation centered on the Bible, a Pentecostal sensory regime was progressively constituted: a given grammar for sensing the objects of experience in relation to the biblical regime of truth. The result is a circular dynamic within which images are experienced sensorially through the prism of biblical truth, and these sensations, in turn, objectify the reality of that truth in the subject's body. In this dialectic between believing and feeling, the sensory experience of watching television is informed by a Pentecostal understanding of reality, and this understanding is authenticated by what is felt.

Relevance: 60.00%

Abstract:

Most wearable activity recognition systems assume a predefined sensor deployment that remains unchanged during runtime. However, this assumption does not reflect real-life conditions. During the normal use of such systems, users may place the sensors in a position different from the predefined sensor placement. Also, sensors may move from their original location to a different one due to a loose attachment. Activity recognition systems trained on activity patterns characteristic of a given sensor deployment are likely to fail due to sensor displacements. In this work, we explore the effects of sensor displacement induced both by the intentional misplacement of sensors and by self-placement by the user. The effects of sensor displacement are analyzed for standard activity recognition techniques, as well as for an alternative robust sensor fusion method proposed in a previous work. While classical recognition models show little tolerance to sensor displacement, the proposed method is shown to have a notable capability to assimilate the changes introduced in the sensor position due to self-placement, and it provides considerable improvements for large misplacements.

Relevance: 60.00%

Abstract:

Starting from a proposal to broaden the concept of metafiction, this thesis aims to investigate metafictional uses in the works of two authors of contemporary Brazilian literature: Sérgio Sant'Anna and Rubens Figueiredo. In the case of Sérgio Sant'Anna, the relationships that metafiction establishes with the control of the imaginary, with the aesthetic reflection carried out within fictional texts, and with the polemic surrounding the avant-gardes are investigated. In the case of Rubens Figueiredo, the discussion of the use of metafictional discourse takes into account the three cycles through which the author's work passes, cycles marked by an emphasis on particular issues, such as the parody of detective novels, the question of identity, and the problem of the social. This panoramic view of the fictional work of the two authors should show how the different metafictional uses, by attributing different values to literature's connection with the so-called real world, point to two almost antagonistic conceptions of the literary phenomenon.

Relevance: 60.00%

Abstract:

Geração e Simplificação da Base de Conhecimento de um Sistema Híbrido Fuzzy-Genético (Generation and Simplification of the Knowledge Base of a Hybrid Fuzzy-Genetic System) proposes a methodology for developing the knowledge base of fuzzy systems, grounded in evolutionary computation techniques. The evolved fuzzy systems are evaluated according to two distinct criteria: performance and interpretability. A methodology for the analysis of multi-objective problems using fuzzy logic was also developed for this purpose and incorporated into the evaluation process of the genetic algorithms (GAs). The evolved fuzzy systems were evaluated through computational simulations, and the results obtained were compared with those obtained by other methods in different types of applications. The use of the proposed methodology showed that the evolved fuzzy systems combine good performance with good interpretability of their knowledge base, making their use viable in the design of real-world systems.

Relevance: 60.00%

Abstract:

Perhaps the most difficult job of the ecotoxicologist is extrapolating data calculated from laboratory experiments with high precision and accuracy into the real world of highly dynamic aquatic environments. The establishment of baseline laboratory toxicity testing data for individual compounds and ecologically important species, together with field studies, serves as a precursor to the ecosystem-level studies needed for ecological risk assessment. The first stage in the field portion of risk assessment is the determination of actual environmental concentrations of the contaminant being studied and the matching of those concentrations with laboratory toxicity tests. Risk estimates can be produced via risk quotients that would determine the probability that adverse effects may occur. In this first stage of risk assessment, environmental realism is often not achieved. This is due, in part, to the fact that single-species laboratory toxicity tests, while highly controlled, do not account for the complex interactions (chemical, physical, and biological) that take place in the natural environment. By controlling as many variables in the laboratory as possible, an experiment can be produced in such a fashion that the real effects of a compound can be determined for a particular test organism. This type of approach obviously makes comparison with real-world data most difficult. Conversely, field-oriented studies fall short in the interpretation of ecological risk assessment because of low statistical power, lack of adequate replication, and the enormous amount of time and money needed to perform such studies. Unlike a controlled laboratory bioassay, many stressors other than the chemical compound in question affect organisms in the environment. These stressors range from natural occurrences (such as changes in temperature, salinity, and community interactions) to other confounding anthropogenic inputs. Therefore, an improved aquatic toxicity test that will enhance environmental realism and increase the accuracy of future ecotoxicological risk assessments is needed.
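
In its simplest textbook form, the risk quotient mentioned above is the ratio of an exposure estimate to a toxicity endpoint; the symbols below are the conventional ones and are not necessarily those used in this study:

```latex
\mathrm{RQ} \;=\; \frac{\text{estimated environmental concentration (EEC)}}{\text{toxicity endpoint (e.g., } LC_{50} \text{ or NOEC)}}
```

Quotients at or above 1, or above an agency-specific level of concern, flag a potential for adverse effects that warrants higher-tier study.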

Relevance: 60.00%

Abstract:

We introduce the Pitman-Yor Diffusion Tree (PYDT) for hierarchical clustering, a generalization of the Dirichlet Diffusion Tree (Neal, 2001) which removes the restriction to binary branching structure. The generative process is described and shown to result in an exchangeable distribution over data points. We prove some theoretical properties of the model and then present two inference methods: a collapsed MCMC sampler, which allows us to model uncertainty over tree structures, and a computationally efficient greedy Bayesian EM search algorithm. Both algorithms use message passing on the tree structure. The utility of the model and algorithms is demonstrated on synthetic and real-world data, both continuous and binary.

Relevance: 60.00%

Abstract:

Cluster analysis of ranking data, which arises in consumer questionnaires, voting forms, or other surveys of preferences, attempts to identify typical groups of rank choices. Empirically measured rankings are often incomplete, i.e., different numbers of filled rank positions cause heterogeneity in the data. We propose a mixture approach for clustering heterogeneous rank data. Rankings of different lengths can be described and compared by means of a single probabilistic model. A maximum entropy approach avoids hidden assumptions about missing rank positions. Parameter estimators and an efficient EM algorithm for unsupervised inference are derived for the ranking mixture model. Experiments on both synthetic and real-world data demonstrate significantly improved parameter estimates on heterogeneous data when the incomplete rankings are included in the inference process.

Relevance: 60.00%

Abstract:

Human locomotion is known to be influenced by observation of another person's gait. For example, athletes often synchronize their steps in long-distance races. However, how interaction with a virtual runner affects the gait of a real runner has not been studied. We investigated this by creating an illusion of running behind a virtual model (VM), using a treadmill and a large-screen virtual environment showing a video of a VM. We looked at step synchronization between the real and virtual runner and at the role of step frequency (SF) in the real runner's perception of VM speed. We found that subjects match VM SF when asked to match VM speed with their own (Figure 1). This indicates that step synchronization may be a strategy of speed matching or speed perception. Subjects chose higher speeds when VM SF was higher (though VM speed was 12 km/h in all videos). This effect was more pronounced when the speed estimate was rated verbally while standing still (Figure 2). This may be due to correlated physical activity affecting the perception of VM speed [Jacobs et al. 2005], or to step synchronization altering the subjects' perception of self speed [Durgin et al. 2007]. Our findings indicate that third-person activity in a collaborative virtual locomotive environment can have a pronounced effect on an observer's gait activity and their perceptual judgments of the activity of others: the SF of others (virtual or real) can potentially influence one's perception of self speed and lead to changes in speed and SF. A better understanding of the underlying mechanisms would support the design of more compelling virtual trainers and may be instructive for competitive athletics in the real world. © 2009 ACM.