903 results for Hierarchical model
Abstract:
Conventional understandings of what the Westminster model implies anticipate reliance on a top-down, hierarchical approach to budgetary accountability, reinforced by a post–New Public Management emphasis on recentralizing administrative capacity. This article, based on a comparative analysis of the experiences of Britain and Ireland, argues that the Westminster model of bureaucratic control and oversight has itself been evolving, hastened in large part by the global financial crisis. Governments have gained stronger controls over the structures and practices of agencies, but agencies are also key players in securing better governance outcomes. The implication is that the crisis has not seen a return to the archetypal command-and-control model, nor a wholly new implementation of negotiated European-type practices, but rather a new accountability balance between elements of the Westminster system itself that have not previously been well understood.
Abstract:
The high level of unemployment is one of the major problems in most European countries nowadays. Hence, the demand for small area labour market statistics has rapidly increased over the past few years. The Labour Force Survey (LFS) conducted by the Portuguese Statistical Office is the main source of official statistics on the labour market at the macro level (e.g. NUTS2 and national level). However, the LFS was not designed to produce reliable statistics at the micro level (e.g. NUTS3, municipalities or further disaggregated levels) due to small sample sizes. Consequently, traditional design-based estimators are not appropriate. A solution to this problem is to consider model-based estimators that "borrow information" from related areas or past samples by using auxiliary information. This paper reviews, under the model-based approach, Best Linear Unbiased Predictors and an estimator based on the posterior predictive distribution of a Hierarchical Bayesian model. The goal of this paper is to assess the feasibility of producing accurate unemployment rate statistics at the micro level from the Portuguese LFS using these kinds of estimators. The paper discusses the advantages of each approach and the viability of its implementation.
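To make the "borrowing strength" idea concrete, below is a minimal sketch of an area-level hierarchical Bayesian model in the spirit of the Fay-Herriot specification commonly used in small area estimation. The covariate, priors, sample sizes, and the use of PyMC are illustrative assumptions, not the paper's actual estimator.

```python
# Minimal area-level hierarchical Bayesian sketch (assumed, for illustration):
# direct LFS estimates with known design variances shrink toward a regression
# on auxiliary information, yielding model-based small-area rates.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_areas = 30
x = rng.normal(size=n_areas)                                       # auxiliary covariate per area
y_direct = 0.08 + 0.02 * x + rng.normal(scale=0.01, size=n_areas)  # direct estimates
se_direct = np.full(n_areas, 0.01)                                 # known design-based std. errors

with pm.Model():
    beta0 = pm.Normal("beta0", 0.0, 1.0)
    beta1 = pm.Normal("beta1", 0.0, 1.0)
    sigma_v = pm.HalfNormal("sigma_v", 0.05)                 # between-area variation
    theta = pm.Normal("theta", beta0 + beta1 * x, sigma_v, shape=n_areas)
    pm.Normal("y", theta, se_direct, observed=y_direct)      # sampling model
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

# Posterior means of theta are the model-based small-area estimates.
print(idata.posterior["theta"].mean(dim=("chain", "draw")).values)
```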
Abstract:
A biological disparity energy model can estimate local depth information by using a population of V1 complex cells. Instead of applying an analytical model which explicitly involves cell parameters like spatial frequency, orientation, binocular phase and position difference, we developed a model which only involves the cells' responses, such that disparity can be extracted from a population code, using only a set of cells previously trained with random-dot stereograms of uniform disparity. Despite good results in smooth regions, the model needs complementary processing, notably at depth transitions. We therefore introduce a new model to extract disparity at keypoints such as edge junctions, line endings and points with large curvature. Responses of end-stopped cells serve to detect keypoints, and those of simple cells are used to detect the orientations of their underlying line and edge structures. Annotated keypoints are then used in the left-right matching process, with a hierarchical, multi-scale tree structure and a saliency map to segregate disparity. By combining both models we can (re)define depth transitions and regions where the disparity energy model is less accurate.
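As a toy illustration of reading disparity off a population code of previously trained cells, the sketch below uses a rectified population-vector average; the decoding rule and all numbers are assumptions for illustration, not the authors' trained model.

```python
# Population-code disparity decoding (illustrative sketch).
import numpy as np

def decode_disparity(responses: np.ndarray, trained_disparities: np.ndarray) -> float:
    """Estimate local disparity as the response-weighted average of the
    disparities the cells were trained on (a simple population vector)."""
    w = np.clip(responses, 0.0, None)   # rectify: only positive activity votes
    if w.sum() == 0.0:
        return float("nan")             # no evidence at this image location
    return float(np.sum(w * trained_disparities) / w.sum())

# Example: five cells tuned (by training) to disparities of -2..2 pixels.
cells = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
resp = np.array([0.1, 0.5, 1.0, 0.4, 0.05])
print(decode_disparity(resp, cells))    # ~ -0.10 pixels
```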
Abstract:
The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, five broad selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included. After identifying the criteria, a survey was prepared and companies were contacted in order to understand which factors carry more weight in their decisions when choosing partners. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or Value Analysis. The goal of the paper is to provide a reference selection model that can serve as a guideline for decision making in the supplier/partner selection process.
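For readers unfamiliar with AHP, the sketch below shows how criterion weights could be derived with the principal-eigenvector method over the paper's five criteria. The pairwise comparison values are invented for illustration; they are not the survey's results.

```python
# AHP priority weights from a (hypothetical) pairwise comparison matrix.
import numpy as np

criteria = ["Quality", "Financial", "Synergies", "Cost", "Production System"]

# Reciprocal comparison matrix on Saaty's 1-9 scale (values assumed):
# A[i, j] = how much more important criterion i is than criterion j.
A = np.array([
    [1,   3,   5,   2,   4],
    [1/3, 1,   3,   1/2, 2],
    [1/5, 1/3, 1,   1/4, 1/2],
    [1/2, 2,   4,   1,   3],
    [1/4, 1/2, 2,   1/3, 1],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                   # principal eigenvalue index
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalize to priorities

# Consistency check: CR < 0.1 is conventionally acceptable (RI = 1.12 for n = 5).
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
print(dict(zip(criteria, weights.round(3))), "CR:", round(ci / 1.12, 3))
```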
Abstract:
Dehumanizing ideologies that explicitly liken other humans to "inferior" animals can have negative consequences for intergroup attitudes and relations. Surprisingly, very little is known about the causes of dehumanization, and essentially no research has examined strategies for reducing dehumanizing tendencies. The Interspecies Model of Prejudice specifies that animalistic dehumanization may be rooted in basic hierarchical beliefs regarding human superiority over animals. This theoretical reasoning suggests that narrowing the human-animal divide should also reduce dehumanization. The purpose of the present dissertation, therefore, was to gain a more complete understanding of the predictors of and solutions to dehumanization by examining the Interspecies Model of Prejudice, first from a layperson's perspective and then among young children. In Study 1, laypeople strongly rejected the human-animal divide as a probable cause of, or solution to, dehumanization, despite evidence that their own personal beliefs in the human-animal divide positively predicted their dehumanization (and prejudice) scores. From Study 1, it was concluded that the human-animal divide, despite being a robust empirical predictor of dehumanization, is largely unrecognized as a probable cause of, or solution to, dehumanization by non-experts in the psychology of prejudice. Studies 2 and 3 explored the expression of dehumanization, as well as the Interspecies Model of Prejudice, among children ages six to ten years (Studies 2 and 3) and parents (Study 3). Across both studies, White children showed evidence of racial dehumanization by attributing fewer "uniquely human" characteristics to a Black child target than to a White child target, representing the first systematic evidence of racial dehumanization among children. In Study 3, path analyses supported the Interspecies Model of Prejudice among children. Specifically, children's beliefs in the human-animal divide predicted greater racial prejudice, an effect explained by heightened racial dehumanization. Moreover, parents' Social Dominance Orientation (preference for social hierarchy and inequality) positively predicted children's human-animal divide beliefs. Critically, these effects remained significant even after controlling for established predictors of child prejudice (i.e., parent prejudice, authoritarian parenting, and social-cognitive skills) and relevant child demographics (i.e., age and sex). Similar patterns emerged among parent participants, further supporting the Interspecies Model of Prejudice. Encouragingly, children reported narrower human-animal divide perceptions after being exposed to an experimental prime (versus control) that highlighted the similarities between humans and animals. Together, the three studies reported in this dissertation offer important and novel contributions to the dehumanization and prejudice literature. Not only did we find the first systematic evidence of racial dehumanization among children, but we also established the human-animal divide as a meaningful dehumanization precursor. Moreover, empirical support was obtained for the Interspecies Model of Prejudice among diverse samples including university students (Study 1), children (Studies 2 and 3), and adult-aged samples (Study 3). Importantly, each study also highlights the promising social implication of targeting the human-animal divide in interventions to reduce dehumanization and other prejudicial processes.
Abstract:
We study the dynamics of a game-theoretic network formation model that yields large-scale small-world networks. So far, mostly stochastic frameworks have been utilized to explain the emergence of these networks. On the other hand, it is natural to seek game-theoretic network formation models in which links are formed due to strategic behaviors of individuals, rather than based on probabilities. Inspired by Even-Dar and Kearns (2007), we consider a more realistic model in which the cost of establishing each link is dynamically determined during the course of the game. Moreover, players are allowed to put transfer payments on the formation of links. Also, they must pay a maintenance cost to sustain their direct links during the game. We show that equilibrium networks in our model have a small diameter of at most 4. Unlike earlier models, not only is the existence of equilibrium networks guaranteed in our model, but these networks also coincide with the outcomes of pairwise Nash equilibrium in network formation. Furthermore, we provide a network formation simulation that generates small-world networks. We also analyze the impact of locating players in a hierarchical structure by constructing a strategic model where a complete b-ary tree is the seed network; a small sketch of such a seed appears below.
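A small sketch of such a hierarchical seed (using networkx, with an assumed branching factor and height; not the paper's simulation code):

```python
# Build a complete b-ary tree seed network and inspect its size and diameter.
import networkx as nx

b, height = 3, 4                        # branching factor and height (assumed)
seed = nx.balanced_tree(r=b, h=height)  # complete b-ary tree
print(seed.number_of_nodes())           # (b**(height+1) - 1) / (b - 1) = 121
print(nx.diameter(seed))                # 2 * height = 8; strategic link
                                        # formation would then shrink this
```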
Abstract:
This thesis is an outcome of investigations carried out on the development of an Artificial Neural Network (ANN) model to implement the 2-D DFT at high speed. A new definition of the 2-D DFT relation is presented. This new definition enables DFT computation organized in stages involving only real addition except at the final stage of computation. The number of stages is always fixed at 4. Two different strategies are proposed: 1) a visual representation of 2-D DFT coefficients; 2) a neural network approach. The visual representation scheme can be used to compute, analyze and manipulate 2-D signals such as images in the frequency domain in terms of symbols derived from the 2x2 DFT. These, in turn, can be represented in terms of real data. This approach can help analyze signals in the frequency domain even without computing the DFT coefficients. A hierarchical neural network model is developed to implement the 2-D DFT. Presently, this model is capable of implementing the 2-D DFT for a particular order N such that ((N))_4 = 2, i.e. N mod 4 = 2. The model can be developed into one that can implement the 2-D DFT for any order N up to a set maximum limited by the hardware constraints. The reported method shows potential for implementing the 2-D DFT in hardware as a VLSI/ASIC.
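The "only real additions" property can be illustrated with the 2x2 DFT building block, which reduces to a plus/minus butterfly. This is a generic fact about the 2x2 DFT of real data, shown here as a sketch, not the thesis's full four-stage construction.

```python
# The 2x2 DFT of a real block needs no multiplications, only +/-.
import numpy as np

def dft2x2(block: np.ndarray) -> np.ndarray:
    """2x2 DFT computed with additions and subtractions only."""
    a, b = block[0, 0], block[0, 1]
    c, d = block[1, 0], block[1, 1]
    return np.array([[a + b + c + d, a - b + c - d],
                     [a + b - c - d, a - b - c + d]])

block = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.allclose(dft2x2(block), np.fft.fft2(block).real)  # imaginary part is zero
```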
Abstract:
Knowledge discovery in databases is the non-trivial process of identifying valid, novel, potentially useful and ultimately understandable patterns from data. The term data mining refers to the process of exploratory analysis on the data that builds some model of the data. To infer patterns from data, data mining involves different approaches like association rule mining, classification techniques or clustering techniques. Among the many data mining techniques, clustering plays a major role, since it helps to group related data for assessing properties and drawing conclusions. Most clustering algorithms act on a dataset with a uniform format, since the similarity or dissimilarity between the data points is a significant factor in finding the clusters. If a dataset consists of mixed attributes, i.e. a combination of numerical and categorical variables, a preferred approach is to convert the different formats into a uniform one. The research study explores various techniques to convert mixed data sets to a numerical equivalent, so that statistical and similar algorithms can be applied; a generic sketch of this conversion-then-cluster approach follows below. The results of clustering mixed-category data after conversion to a numeric data type are demonstrated using a crime data set. The thesis also proposes an extension to the well-known algorithm for handling mixed data types, to deal with data sets having only categorical data. The proposed conversion has been validated on a data set corresponding to breast cancer. Moreover, another issue with the clustering process is the visualization of output. Different geometric techniques like scatter plots or projection plots are available, but none of them displays the result as a projection of the whole database; rather, they support only attribute-pairwise analysis.
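A hedged sketch of the generic conversion-then-cluster approach (one-hot encoding of categorical attributes; the column names and data are invented, and the thesis's specific conversion techniques may differ):

```python
# Convert mixed attributes to a numerical equivalent, then cluster.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "age": [25, 47, 36, 52],
    "income": [30000, 82000, 51000, 90000],
    "crime_type": ["theft", "fraud", "theft", "assault"],   # categorical
})

numeric = pd.get_dummies(df, columns=["crime_type"])  # categorical -> 0/1 columns
scaled = StandardScaler().fit_transform(numeric)      # put attributes on one scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(labels)
```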
Abstract:
As the number of processors in distributed-memory multiprocessors grows, efficiently supporting a shared-memory programming model becomes difficult. We have designed the Protocol for Hierarchical Directories (PHD) to allow shared-memory support for systems containing massive numbers of processors. PHD eliminates bandwidth problems by using a scalable network, decreases hot-spots by not relying on a single point to distribute blocks, and uses a scalable amount of space for its directories. PHD provides a shared-memory model by synthesizing a global shared memory from the local memories of processors. PHD supports sequentially consistent read, write, and test-and-set operations. This thesis also introduces a method of describing locality for hierarchical protocols and employs this method in the derivation of an abstract model of the protocol behavior. An embedded model, based on the work of Johnson [ISCA19], describes the protocol behavior when mapped to a k-ary n-cube. The thesis uses these two models to study the average height in the hierarchy that operations reach, the longest path messages travel, the number of messages that operations generate, the inter-transaction issue time, and the protocol overhead for different locality parameters, degrees of multithreading, and machine sizes. We determine that multithreading is only useful for approximately two to four threads; any additional interleaving does not decrease the overall latency. For small machines and high-locality applications, this limitation is due mainly to the length of the running threads. For large machines with medium to low locality, this limitation is due mainly to the protocol overhead being too large. Our study using the embedded model shows that in situations where the run length between references to shared memory is at least an order of magnitude longer than the time to process a single state transition in the protocol, applications exhibit good performance. If separate controllers for processing protocol requests are included, the protocol scales to 32k-processor machines as long as the application exhibits hierarchical locality: at least 22% of the global references must be satisfiable locally, and at most 35% of the global references are allowed to reach the top level of the hierarchy.
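As a toy illustration of one quantity studied, the average height operations reach, suppose each level of the hierarchy independently satisfies a request with a fixed probability; this simple geometric model is an assumption for illustration, not the thesis's embedded model.

```python
# Expected height a request climbs in a hierarchical directory (toy model).
def expected_height(p_local: float, max_height: int) -> float:
    """E[height] when a request stops at level h with prob p*(1-p)**h and
    otherwise terminates at the root (max_height)."""
    e, p_reach = 0.0, 1.0
    for h in range(max_height):
        e += h * p_reach * p_local
        p_reach *= 1.0 - p_local
    return e + max_height * p_reach     # requests that reach the top level

# E.g. 22% of references satisfiable at each level of a 5-level hierarchy:
print(round(expected_height(0.22, 5), 2))
```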
Abstract:
We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIMs). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
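To make the model class concrete, here is a compact EM sketch for a one-level (non-hierarchical) mixture of two linear experts with a softmax gate, a deliberate simplification of the tree-structured architecture. The synthetic data and the single gradient step used for the gating update are assumptions, not the paper's algorithm.

```python
# EM for a mixture of linear experts with softmax gating (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 2
x = rng.uniform(-1, 1, size=(n, 1))
X = np.hstack([x, np.ones((n, 1))])           # inputs with a bias column
true_w = np.array([[2.0, 0.0], [-2.0, 1.0]])  # two linear regimes
z = (x[:, 0] > 0).astype(int)
y = np.einsum("nd,nd->n", X, true_w[z]) + 0.1 * rng.normal(size=n)

W = rng.normal(size=(k, 2))                   # expert (GLIM) weights
V = np.zeros((k, 2))                          # gating weights
sigma2 = 1.0

for _ in range(100):
    # E-step: posterior responsibility of each expert for each point.
    gate = np.exp(X @ V.T)
    gate /= gate.sum(axis=1, keepdims=True)
    lik = np.exp(-(y[:, None] - X @ W.T) ** 2 / (2 * sigma2))
    r = gate * lik + 1e-12
    r /= r.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted least squares per expert.
    for j in range(k):
        Rw = r[:, j][:, None] * X
        W[j] = np.linalg.solve(X.T @ Rw + 1e-8 * np.eye(2), Rw.T @ y)
    sigma2 = float((r * (y[:, None] - X @ W.T) ** 2).sum() / n)
    # Gating update: one gradient ascent step toward the responsibilities.
    V += 0.5 * (r - gate).T @ X / n

print(W)  # rows should approach the true regimes (up to permutation)
```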
Abstract:
The visual recognition of complex movements and actions is crucial for communication and survival in many species. Remarkable sensitivity and robustness of biological motion perception have been demonstrated in psychophysical experiments. In recent years, neurons and cortical areas involved in action recognition have been identified in neurophysiological and imaging studies. However, the detailed neural mechanisms that underlie the recognition of such complex movement patterns remain largely unknown. This paper reviews the experimental results and summarizes them in terms of a biologically plausible neural model. The model rests on the key assumption that action recognition relies on learned prototypical patterns and exploits information from both the ventral and the dorsal pathway. The model makes specific predictions that motivate new experiments.
Abstract:
Understanding how the human visual system recognizes objects is one of the key challenges in neuroscience. Inspired by a large body of physiological evidence (Felleman and Van Essen, 1991; Hubel and Wiesel, 1962; Livingstone and Hubel, 1988; Ts'o et al., 2001; Zeki, 1993), a general class of recognition models has emerged which is based on a hierarchical organization of visual processing, with succeeding stages being sensitive to image features of increasing complexity (Hummel and Biederman, 1992; Riesenhuber and Poggio, 1999; Selfridge, 1959). However, these models appear to be incompatible with some well-known psychophysical results. Prominent among these are experiments investigating recognition impairments caused by vertical inversion of images, especially those of faces. It has been reported that faces that differ "featurally" are much easier to distinguish when inverted than those that differ "configurally" (Freire et al., 2000; Le Grand et al., 2001; Mondloch et al., 2002), a finding that is difficult to reconcile with the aforementioned models. Here we show that after controlling for subjects' expectations, there is no difference between "featurally" and "configurally" transformed faces in terms of inversion effect. This result reinforces the plausibility of simple hierarchical models of object representation and recognition in cortex.
Abstract:
Our goal in this paper is to assess the reliability and validity of egocentered network data using multilevel analysis (Muthen, 1989; Hox, 1993) under the multitrait-multimethod approach. The confirmatory factor analysis model for multitrait-multimethod data (Werts & Linn, 1970; Andrews, 1984) is used for our analyses. In this study we reanalyse part of the data from another study (Kogovšek et al., 2002) conducted on a representative sample of the inhabitants of Ljubljana. The traits used in our article are the name interpreters. We consider egocentered network data as hierarchical; therefore a multilevel analysis is required. We use Muthen's partial maximum likelihood approach, called the pseudobalanced solution (Muthen, 1989, 1990, 1994), which produces estimates close to maximum likelihood for large ego sample sizes (Hox & Mass, 2001). Several analyses are done in order to compare this multilevel analysis to classic methods of analysis such as the ones in Kogovšek et al. (2002), who analysed the data only at the group (ego) level, considering averages over all alters within each ego. We show that some of the results obtained by classic methods are biased and that multilevel analysis provides more detailed information that greatly enriches the interpretation of the reliability and validity of hierarchical data. Within- and between-ego reliabilities and validities and other related quality measures are defined, computed and interpreted.
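As a minimal sketch of the hierarchical structure involved (alters nested within egos), a random-intercept model separates between-ego from within-ego variance, the decomposition underlying such reliability measures. The data and the use of statsmodels are invented for illustration; this is not the multitrait-multimethod CFA model itself.

```python
# Variance decomposition for alters nested within egos (illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_egos, n_alters = 100, 5
ego = np.repeat(np.arange(n_egos), n_alters)
between = rng.normal(scale=1.0, size=n_egos)[ego]       # ego-level effects
within = rng.normal(scale=0.5, size=n_egos * n_alters)  # alter-level noise
df = pd.DataFrame({"ego": ego, "tie_strength": 3.0 + between + within})

fit = smf.mixedlm("tie_strength ~ 1", df, groups=df["ego"]).fit()
var_between = float(fit.cov_re.iloc[0, 0])  # between-ego variance
var_within = float(fit.scale)               # within-ego (residual) variance
print(f"between-ego share (ICC): {var_between / (var_between + var_within):.2f}")
```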
Abstract:
We demonstrate that it is possible to link multi-chain molecular dynamics simulations with the tube model using a single-chain slip-link model as a bridge. This hierarchical approach allows a significant speed-up of simulations, permitting us to span the time scales relevant for a comparison with tube theory. Fitting the mean-square displacement of individual monomers in molecular dynamics simulations with the slip-spring model, we show that it is possible to predict the stress relaxation. We then analyze the stress relaxation from slip-spring simulations in the framework of the tube theory. In the absence of constraint release, we establish that the relaxation modulus can be decomposed as the sum of contributions from fast and longitudinal Rouse modes and tube survival. Finally, we discuss some open questions regarding possible future directions that could be profitable in rendering the tube model quantitative, even for mildly entangled polymers.
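The decomposition can be illustrated with standard textbook forms: Rouse-mode contributions plus the Doi-Edwards tube-survival function. The time constants and plateau modulus below are arbitrary placeholders, and this sketch is not the paper's fitted model.

```python
# Relaxation modulus as Rouse modes plus tube survival (textbook forms).
import numpy as np

def g_rouse(t: np.ndarray, tau_r: float, n_modes: int = 100) -> np.ndarray:
    """Rouse contribution per chain (arbitrary units)."""
    p = np.arange(1, n_modes + 1)
    return np.exp(-2.0 * p[None, :] ** 2 * t[:, None] / tau_r).sum(axis=1)

def mu_tube(t: np.ndarray, tau_d: float, n_modes: int = 99) -> np.ndarray:
    """Doi-Edwards tube survival probability (odd modes only)."""
    p = np.arange(1, n_modes + 1, 2)
    return (8.0 / (np.pi ** 2 * p[None, :] ** 2)
            * np.exp(-p[None, :] ** 2 * t[:, None] / tau_d)).sum(axis=1)

t = np.logspace(-2, 3, 200)
G = g_rouse(t, tau_r=1.0) + 4.0 * mu_tube(t, tau_d=100.0)  # 4.0: assumed plateau modulus
print(G[:3], G[-3:])
```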
Abstract:
Classical computer vision methods can only weakly emulate the multi-level parallelism in signal processing and information sharing that takes place in different parts of the primate visual system and enables it to accomplish many diverse functions of visual perception. One of the main functions of primate vision is to detect and recognise objects in natural scenes despite all the linear and non-linear variations of the objects and their environment. The superior performance of the primate visual system compared to what machine vision systems have achieved to date motivates scientists and researchers to further explore this area in pursuit of more efficient vision systems inspired by natural models. In this paper, building blocks for an efficient hierarchical object recognition model are proposed. Incorporating attention-based processing would lead to a system that processes visual data in a non-linear way, focusing only on regions of interest and hence reducing the time needed to achieve real-time performance. Further, it is suggested to modify the visual cortex model for recognizing objects by adding non-linearities in the ventral path, consistent with earlier discoveries reported by researchers in the neurophysiology of vision.