912 results for multi-dimensional
Abstract:
This paper addresses the numerical solution of the rendering equation in realistic image synthesis. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model; the chosen illumination model determines the kernel of the equation under consideration. Monte Carlo methods are now widely used to solve the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for solving the rendering equation in parallel. The integration domain of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles; this domain partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum with a desired truncation error (systematic error), which yields a fixed number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we evaluate multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence of the stratified Monte Carlo method is obtained. The domain partitioning allows an easy parallel realisation and improves the convergence of the Monte Carlo method. High-performance and Grid computing implementations of the corresponding Monte Carlo scheme are discussed.
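To make the stratification idea concrete, here is a minimal Python sketch (not taken from the paper) that estimates a hemispherical integral by splitting the hemisphere into equal-solid-angle cells in (cos θ, φ) and sampling each cell uniformly; the rectangular cell layout, cell counts and test integrand are illustrative assumptions, not the paper's orthogonal spherical-triangle partition.

```python
import numpy as np

def stratified_hemisphere_estimate(f, n_theta=8, n_phi=16, samples_per_cell=4, rng=None):
    """Stratified Monte Carlo estimate of the integral of f over the unit hemisphere.

    The hemisphere is split into equal-solid-angle cells in (cos(theta), phi);
    each cell is sampled uniformly and the per-cell estimates are summed.
    """
    rng = np.random.default_rng() if rng is None else rng
    cell_solid_angle = 2.0 * np.pi / (n_theta * n_phi)
    estimate = 0.0
    for i in range(n_theta):
        for j in range(n_phi):
            # Sample uniformly within this (cos(theta), phi) cell.
            u = rng.uniform(i / n_theta, (i + 1) / n_theta, samples_per_cell)      # u = cos(theta)
            phi = rng.uniform(2 * np.pi * j / n_phi, 2 * np.pi * (j + 1) / n_phi, samples_per_cell)
            sin_theta = np.sqrt(1.0 - u ** 2)
            dirs = np.stack([sin_theta * np.cos(phi), sin_theta * np.sin(phi), u], axis=1)
            estimate += cell_solid_angle * np.mean([f(d) for d in dirs])
    return estimate

# Sanity check: the integral of cos(theta) over the hemisphere equals pi.
print(stratified_hemisphere_estimate(lambda w: w[2]))
```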
Abstract:
K-Means is a popular clustering algorithm which adopts an iterative refinement procedure to determine data partitions and to compute their associated centres of mass, called centroids. The straightforward implementation of the algorithm is often referred to as `brute force' since it computes a proximity measure from each data point to each centroid at every iteration of the K-Means process. Efficient implementations of the K-Means algorithm have been predominantly based on multi-dimensional binary search trees (KD-Trees). A combination of an efficient data structure and geometrical constraints makes it possible to reduce the number of distance computations required at each iteration. In this work we present a general space partitioning approach for improving the efficiency and the scalability of the K-Means algorithm. We propose to adopt approximate hierarchical clustering methods to generate binary space partitioning trees, in contrast to KD-Trees. In the experimental analysis, we have tested the performance of the proposed Binary Space Partitioning K-Means (BSP-KM) when a divisive clustering algorithm is used. We have carried out extensive experimental tests to compare the proposed approach to the one based on KD-Trees (KD-KM) in a wide range of the parameter space. BSP-KM is more scalable than KD-KM, while keeping the deterministic nature of the `brute force' algorithm. In particular, the proposed space partitioning approach has been shown to overcome the well-known limitation of KD-Trees in high-dimensional spaces and can also be adopted to improve the efficiency of other algorithms in which KD-Trees have been used.
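For readers unfamiliar with the baseline being accelerated, the sketch below contrasts the `brute force' assignment step with a KD-tree query over the current centroids using SciPy's cKDTree; indexing the centroids (rather than the data points) and all array sizes are illustrative assumptions, and the snippet does not implement the proposed BSP-KM method.

```python
import numpy as np
from scipy.spatial import cKDTree

def assign_brute_force(points, centroids):
    """'Brute force' step: distance from every point to every centroid."""
    d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def assign_kdtree(points, centroids):
    """Same assignment, but the centroids are indexed in a KD-tree."""
    _, labels = cKDTree(centroids).query(points)
    return labels

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))   # data points
C = rng.normal(size=(50, 8))       # current centroids
print(np.array_equal(assign_brute_force(X, C), assign_kdtree(X, C)))  # True
```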
Abstract:
In a global business economy, firms have a broad range of corporate real estate needs. During the past decade, multiple strategies and tactics have emerged in the corporate real estate community for meeting those needs. We propose here a framework for analysing and prioritising the various types of risk inherent in corporate real estate decisions. From a business strategy perspective, corporate real estate must serve needs beyond the simple one of shelter for the workforce and production process. Certain uses are strategic in that they allow access to externalities, embody the business strategy, or provide entrée to new markets. Other uses may be tactical, in that they arise from business activities of relatively short duration or provide an opportunity to pre-empt competitors. Still other corporate real estate uses can be considered “core” to the existence of the business enterprise. These might be special-use properties or may be generic buildings that have become embodiments of the organisation’s culture. We argue that a multi-dimensional matrix approach organised around three broad themes and nine sub-categories allows the decision-maker to organise and evaluate choices with an acceptable degree of rigour and thoroughness. The three broad themes are Use (divided into Core, Cyclical or Casual), Asset Type (which can be Strategic, Specialty or Generic) and Market Environment (which ranges from Mature Domestic to Emerging Economy). Proper understanding of each of these groupings brings critical variables to the fore and allows for efficient resource allocation and enhanced risk management.
Abstract:
The Self-Organizing Map (SOM) is a popular unsupervised neural network able to provide effective clustering and data visualization for multidimensional input datasets. In this paper, we present an application of the simulated annealing procedure to the SOM learning algorithm, with the aim of obtaining faster learning and better performance in terms of quantization error. The proposed learning algorithm is called the Fast Learning Self-Organized Map, and it does not compromise the simplicity of the basic learning algorithm of the standard SOM. The proposed learning algorithm also improves the quality of the resulting maps by providing better clustering quality and topology preservation of the multi-dimensional input data. Several experiments are used to compare the proposed approach with the original algorithm and some of its modifications and speed-up techniques.
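As background for the annealing idea, the following sketch implements a plain SOM training loop with exponentially decaying (annealed) learning rate and neighbourhood radius; the schedule, grid size and decay constants are assumptions for illustration and do not reproduce the paper's Fast Learning SOM rule.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, rng=None):
    """Minimal SOM training loop with an exponentially annealed learning rate
    and neighbourhood radius (a generic schedule, not the paper's FLSOM rule)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = grid
    dim = data.shape[1]
    weights = rng.uniform(data.min(), data.max(), size=(h, w, dim))
    # Pre-compute grid coordinates for the neighbourhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr = lr0 * np.exp(-3.0 * t)          # annealed learning rate
            sigma = sigma0 * np.exp(-3.0 * t)    # annealed neighbourhood radius
            # Best matching unit (BMU).
            d2 = ((weights - x) ** 2).sum(axis=2)
            bmu = np.unravel_index(d2.argmin(), d2.shape)
            # Gaussian neighbourhood around the BMU.
            g = np.exp(-((coords - np.array(bmu)) ** 2).sum(axis=2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

som = train_som(np.random.default_rng(1).normal(size=(500, 3)))
print(som.shape)  # (10, 10, 3)
```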
Abstract:
In this paper we consider Brownian motion with jump boundary and present a new proof of a recent result of Li, Leung and Rakesh concerning the exact convergence rate in the one-dimensional case. Our methods are different and mainly probabilistic, relying on coupling methods adapted to the special situation under investigation. Moreover, we answer a question raised by Ben-Ari and Pinsky concerning the dependence of the spectral gap on the jump distribution in a multi-dimensional setting.
Abstract:
In 'Avalanche', an object is lowered, with players staying in contact with it throughout. Normally the task is easily accomplished. However, with larger groups counter-intuitive behaviours appear. The paper proposes a formal theory for the underlying causal mechanisms. The aim is not only to provide an explicit, testable hypothesis for the source of the observed modes of behaviour, but also to exemplify the contribution that formal theory building can make to understanding complex social phenomena. Mapping reveals the importance of geometry to the Avalanche game; each player has a pair of balancing loops, one involved in lowering the object, the other ensuring contact. For more players, sets of balancing loops interact and these can allow dominance by reinforcing loops, causing the system to chase upwards towards an ever-increasing goal. However, a series of other effects concerning human physiology and behaviour (HPB) is posited as playing a role. The hypothesis is therefore rigorously tested using simulation. For simplicity a 'One Degree of Freedom' case is examined, allowing all of the effects to be included whilst rendering the analysis more transparent. Formulation and experimentation with the model give insight into the behaviours. Multi-dimensional rate/level analysis indicates that there is only a narrow region in which the system is able to move downwards. Model runs reproduce the single 'desired' mode of behaviour and all three of the observed 'problematic' ones. Sensitivity analysis gives further insight into the system's modes and their causes. The behaviours are seen to arise only when the geometric effects apply (number of players greater than degrees of freedom of the object) in combination with a range of HPB effects. An analogy exists between the co-operative behaviour required here and various examples: conflicting strategic objectives in organizations, the Prisoners' Dilemma and integrated bargaining situations. Additionally, the game may be relatable in more direct algebraic terms to situations involving companies in which the resulting behaviours are mediated by market regulations. Finally, comment is offered on the inadequacy of some forms of theory building and the case is made for formal theory building involving the use of models, analysis and plausible explanations to create deep understanding of social phenomena.
Abstract:
Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in iterative parallel data mining algorithms. In particular, the analysis focuses on one of the most influential and popular data mining methods, the k-means algorithm for cluster analysis. The straightforward parallel formulation of the k-means algorithm requires a global reduction operation at each iteration step, which hinders its scalability. This work studies a different parallel formulation of the algorithm in which the requirement of global communication can be relaxed while still providing the exact solution of the centralised k-means algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error which allows a further reduction of the communication costs.
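The reduction pattern the abstract refers to can be sketched as follows: each worker computes partial centroid sums and counts for its local data block, and a global reduction combines them before the centroids are recomputed. In this illustrative Python sketch the plain-Python "workers" stand in for an MPI allreduce, and all data sizes are arbitrary assumptions.

```python
import numpy as np

def local_kmeans_step(block, centroids):
    """Per-worker part of one k-means iteration: assign points in the local
    block and accumulate partial sums and counts for every centroid."""
    labels = ((block[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
    k, dim = centroids.shape
    sums = np.zeros((k, dim))
    counts = np.zeros(k)
    np.add.at(sums, labels, block)
    np.add.at(counts, labels, 1)
    return sums, counts

def distributed_kmeans_iteration(blocks, centroids):
    """Global-reduction formulation: partial results from all workers are
    summed (the role an allreduce plays) before recomputing the centroids."""
    partials = [local_kmeans_step(b, centroids) for b in blocks]
    sums = sum(p[0] for p in partials)
    counts = sum(p[1] for p in partials)
    counts = np.maximum(counts, 1)  # guard against empty clusters
    return sums / counts[:, None]

rng = np.random.default_rng(2)
data_blocks = [rng.normal(size=(1000, 4)) for _ in range(4)]  # 4 simulated workers
centroids = rng.normal(size=(5, 4))
centroids = distributed_kmeans_iteration(data_blocks, centroids)
print(centroids.shape)  # (5, 4)
```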
Abstract:
Empowerment is a standard but ambiguous element of development rhetoric, and so, through the socially complex and contested terrain of South Africa, this paper explores its potential to contribute to inclusive development. Investigating micro-level engagements with the national strategy of Broad-Based Black Economic Empowerment (B-BBEE) in the South African wine industry highlights the limitations, but also the potential, of this single-domain approach. However, latent paternalism, entrenched interests and a ‘dislocated blackness’ maintain a complex racial politics that shapes both power relations and the opportunities for transformation within the industry. Nonetheless, while B-BBEE may not, in reality, be broad-based, its manifestations are contributing to challenging racist structures and normalising changing attitudes. This paper concludes that, to be transformative, empowerment needs to be re-embedded within South Africa as a multi-scalar, multi-dimensional dialogue and, despite the continuation of structural constraints, positions the local as a critical scale at which to initiate broader social change.
Abstract:
Classical regression methods take vectors as covariates and estimate the corresponding vectors of regression parameters. When addressing regression problems on covariates of more complex form, such as multi-dimensional arrays (i.e. tensors), traditional computational models can be severely compromised by ultrahigh dimensionality as well as complex structure. By exploiting the special structure of tensor covariates, the tensor regression model provides a promising way to reduce the model’s dimensionality to a manageable level, thus leading to efficient estimation. Most existing tensor-based methods estimate each individual regression problem independently, based on a tensor decomposition that allows the simultaneous projection of an input tensor onto more than one direction along each mode. In practice, multi-dimensional data are often collected under the same or very similar conditions, so that the data share some common latent components but can also have their own independent parameters for each regression task. It is therefore beneficial to analyse the regression parameters of all the regressions in a linked way. In this paper, we propose a tensor regression model based on Tucker decomposition, which simultaneously identifies not only the common components of the parameters across all the regression tasks, but also the independent factors contributing to each particular regression task. Under this paradigm, the number of independent parameters along each mode is constrained by a sparsity-preserving regulariser. Linked multiway parameter analysis and sparsity modelling further reduce the total number of parameters, with lower memory cost than tensor-based counterparts. The effectiveness of the new method is demonstrated on real data sets.
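As a reference point for the Tucker machinery mentioned above, the sketch below computes a truncated higher-order SVD (a simple, non-iterative Tucker factorisation) of a random tensor; it illustrates only the core-plus-factor-matrices structure and is not the linked sparse tensor regression model proposed in the paper.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tucker_hosvd(tensor, ranks):
    """Truncated higher-order SVD: a simple (non-iterative) Tucker factorisation.
    Returns the core tensor and one factor matrix per mode."""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = tensor
    for mode, u in enumerate(factors):
        # Multiply the core along this mode by the transposed factor matrix.
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

X = np.random.default_rng(3).normal(size=(6, 7, 8))
core, factors = tucker_hosvd(X, ranks=(3, 3, 3))
print(core.shape, [f.shape for f in factors])  # (3, 3, 3) [(6, 3), (7, 3), (8, 3)]
```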
Abstract:
The purpose of this paper is to investigate several analytical methods of solving the first passage (FP) problem for the Rouse model, the simplest model of a polymer chain. We show that this problem has to be treated as a multi-dimensional Kramers' problem, which presents rich and unexpected behavior. We first perform direct and forward-flux sampling (FFS) simulations, and measure the mean first-passage time $\tau(z)$ for the free end to reach a certain distance $z$ away from the origin. The results show that the mean FP time becomes shorter as the Rouse chain is represented by more beads. Two scaling regimes of $\tau(z)$ are observed, with the transition between them varying as a function of chain length. We use these simulation results to test two theoretical approaches. One is a well-known asymptotic theory valid in the limit of zero temperature. We show that this limit corresponds to a fully extended chain in which each chain segment is stretched, which is not particularly realistic. A new theory based on the well-known Freidlin-Wentzell theory is proposed, in which the dynamics is projected onto the minimal action path. The new theory predicts both scaling regimes correctly, but fails to obtain the correct numerical prefactor in the first regime. Combining our theory with the FFS simulations leads us to a simple analytical expression valid for all extensions and chain lengths. One application of the polymer FP problem occurs in the context of branched polymer rheology. In this paper, we consider the arm-retraction mechanism in the tube model, which maps exactly onto the model we have solved. The results are compared to the Milner-McLeish theory without constraint release, which is found to overestimate the FP time by a factor of 10 or more.
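For orientation, a toy direct-simulation estimate of the mean first-passage time for a tethered Rouse chain is sketched below (overdamped Langevin dynamics with unit friction and temperature); the bead count, spring constant and target distance are arbitrary choices, and the snippet implements neither FFS nor the Freidlin-Wentzell projection discussed in the paper.

```python
import numpy as np

def rouse_first_passage(n_beads=4, z_target=2.0, dt=1e-3, k_spring=3.0, n_runs=20, rng=None):
    """Direct Brownian-dynamics estimate of the mean first-passage time for the
    free end of a Rouse chain (bead 0 tethered at the origin) to reach a
    distance z_target from the origin. Toy units: friction = kT = 1."""
    rng = np.random.default_rng() if rng is None else rng
    times = []
    for _ in range(n_runs):
        x = np.zeros((n_beads, 3))   # all beads start at the origin
        t = 0.0
        while np.linalg.norm(x[-1]) < z_target:
            # Harmonic spring forces between neighbouring beads.
            f = np.zeros_like(x)
            bond = x[1:] - x[:-1]
            f[:-1] += k_spring * bond
            f[1:] -= k_spring * bond
            # Euler-Maruyama step; bead 0 stays tethered at the origin.
            x[1:] += f[1:] * dt + np.sqrt(2.0 * dt) * rng.normal(size=(n_beads - 1, 3))
            t += dt
        times.append(t)
    return np.mean(times)

print(rouse_first_passage())
```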
Abstract:
This study investigates flash flood forecast and warning communication, interpretation, and decision making, using data from a survey of 418 members of the public in Boulder, Colorado, USA. Respondents to the public survey varied in their perceptions and understandings of flash flood risks in Boulder, and some had misconceptions about flash flood risks, such as the safety of crossing fast-flowing water. About 6% of respondents indicated consistent reversals of US watch-warning alert terminology. However, more in-depth analysis illustrates the multi-dimensional, situationally dependent meanings of flash flood alerts, as well as the importance of evaluating interpretation and use of warning information along with alert terminology. Some public respondents estimated low likelihoods of flash flooding given a flash flood warning; these were associated with lower anticipated likelihood of taking protective action given a warning. Protective action intentions were also lower among respondents who had less trust in flash flood warnings, those who had not made prior preparations for flash flooding, and those who believed themselves to be safer from flash flooding. Additional analysis, using open-ended survey questions about responses to warnings, elucidates the complex, contextual nature of protective decision making during flash flood threats. These findings suggest that warnings can play an important role not only by notifying people that there is a threat and helping motivate people to take protective action, but also by helping people evaluate what actions to take given their situation.
Abstract:
We introduce jump processes in $\mathbb{R}^k$, called density-profile processes, to model biological signaling networks. Our modeling setup describes the macroscopic evolution of a finite-size spin-flip model with k types of spins with an arbitrary number of internal states, interacting through a non-reversible stochastic dynamics. We are mostly interested in the multi-dimensional empirical-magnetization vector in the thermodynamic limit, and prove that, within arbitrary finite time-intervals, its path converges almost surely to a deterministic trajectory determined by a first-order (non-linear) differential equation, with explicit bounds on the distance between the stochastic and deterministic trajectories. As the parameters of the spin-flip dynamics change, the associated dynamical system may go through bifurcations, associated with phase transitions in the statistical mechanical setting. We present a simple example of a spin-flip stochastic model, associated with a synthetic biology model known as the repressilator, which leads to a dynamical system with Hopf and pitchfork bifurcations. Depending on the parameter values, the magnetization random path can either converge to a unique stable fixed point, converge to one of a pair of stable fixed points, or asymptotically evolve close to a deterministic orbit in $\mathbb{R}^k$. We also discuss a simple signaling pathway related to cancer research, called the p53 module.
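As an illustration of the kind of deterministic limit dynamics described above, the sketch below integrates a standard protein-only repressilator ODE with SciPy; the equations and parameter values are a common textbook form, assumed here purely for illustration, and are not necessarily the exact deterministic system derived in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def repressilator(t, y, alpha=10.0, n=2.0):
    """Protein-only repressilator ODE (a standard textbook form): each protein
    represses production of the next one in a three-gene cycle."""
    p1, p2, p3 = y
    return [alpha / (1 + p3 ** n) - p1,
            alpha / (1 + p1 ** n) - p2,
            alpha / (1 + p2 ** n) - p3]

sol = solve_ivp(repressilator, (0, 50), [1.0, 1.5, 2.0])
print(sol.y[:, -1])  # state at t = 50; oscillates for suitable alpha and n
```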
Abstract:
In the process of classifying a digital image, the texture attribute can be an important source of information. Although characterising texture in an image is more difficult than characterising spectral attributes, it is known that using texture can significantly increase classification accuracy. The objective of this research is to develop and test a supervised classification method for digital images based on texture attributes. The proposed method implements a filtering process based on Gabor filters. Initially, a set of Gabor filters is generated, tuned to the spatial frequencies associated with the different classes present in the image to be classified. In each case, the parameters used by each filter are estimated from the available samples using the Fourier transform. Each filter then generates a filtered image that quantifies the spatial frequency defined by the filter. This process results in a number of filtered images, which are called "texture bands". In this way, the originally one-dimensional problem becomes multi-dimensional, with each pixel defined by a vector whose dimensionality equals the number of filters used. The multi-band "texture" image can then be classified using a supervised classification method; in the present work, Gaussian Maximum Likelihood was used. The proposed methodology is tested using synthetic and real images. The results obtained are presented and analysed.
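To illustrate the "texture band" construction, the sketch below builds a small bank of real Gabor kernels and filters an image with each one, stacking the responses so that every pixel becomes a feature vector; the frequencies, orientations and kernel size are arbitrary assumptions (the paper estimates them from training samples via the Fourier transform), and the Gaussian Maximum Likelihood classification step is omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(frequency, theta, sigma=3.0, size=21):
    """Real-valued Gabor kernel tuned to a spatial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * frequency * xr)

def texture_bands(image, frequencies=(0.1, 0.2, 0.3), thetas=(0, np.pi / 4, np.pi / 2)):
    """Filter the image with a bank of Gabor filters; each response is one
    'texture band', so every pixel becomes a feature vector."""
    bands = [np.abs(fftconvolve(image, gabor_kernel(f, t), mode="same"))
             for f in frequencies for t in thetas]
    return np.stack(bands, axis=-1)

img = np.random.default_rng(4).random((128, 128))
features = texture_bands(img)
print(features.shape)  # (128, 128, 9): 9 texture bands per pixel
```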
Abstract:
The growing importance of inter-organisational relationships in business research and practice has been noted by several authors, who have placed collaboration at the centre of analyses of how companies compete (ANDERSON; NARUS, 1990; HILL, 1990; MORGAN; HUNT, 1994; WILSON, 1995; DONALDSON; O’TOOLE, 2000). Theoretical perspectives in Marketing, Economics, Strategy and Operations Management indicate that collaboration has a positive impact on firm performance. However, despite the popularity and benefits of inter-organisational relationships, not all the evidence is positive (COMBS; KETCHEN, 1999; HILL, 1990; MADHOK; TALLMAN, 1998), and there are difficulties in measuring and analysing the relationship between these constructs. Aiming to contribute conclusive results based on validated measures of the relationship between collaboration and performance, this research focused on the packaging industry, taking Brazilian packaging manufacturers as the unit of analysis. Both constructs, collaboration and performance, were treated as multi-dimensional, with collaboration analysed at two interfaces of the focal firm: with customers and with suppliers. Collaboration was analysed in four dimensions (flexibility, information exchange, joint problem solving and restraint in the use of power) against performance, with four dimensions of operational performance (flexibility, quality, time and cost) and financial performance in two dimensions (growth and profitability). The results and hypothesis tests showed that there is a relationship between collaboration with suppliers and customers and performance, but the nature of this relationship is neither simple nor universal. When collaboration and performance are treated in their multi-dimensional nature, the effects of collaboration manifest themselves with different intensity across the different performance dimensions. This study has clear managerial implications, as it points out which dimensions of collaboration at each interface of the packaging manufacturer have an impact on each type of performance, making it possible to focus on the most relevant aspects of the relationship in order to obtain better returns.
Abstract:
This study examines the relationship between ownership structure and firm value, treating ownership structure as endogenous and multi-dimensional. To this end, the model developed by Demsetz and Villalonga (2001) is applied to a sample of 192 companies listed on Bovespa between 2006 and 2008. The results indicate that firm value may affect the concentration of the ownership structure, but not vice versa. The evidence also indicates that, in Brazil, the ownership-structure variable that negatively influences firm value is the potential for expropriation of minority shareholders by controlling shareholders, represented by the divergence between the concentration of control rights and the concentration of cash-flow rights.