885 results for post-structural theory
Abstract:
Network analysis naturally relies on graph theory and, more particularly, on the use of node and edge metrics to identify the salient properties in graphs. When building visual maps of networks, these metrics are turned into useful visual cues or are used interactively to filter out parts of a graph while querying it, for instance. Over the years, analysts from different application domains have designed metrics to serve specific needs. Network science is an inherently cross-disciplinary field, which leads to the publication of metrics with similar goals; different names and descriptions of their analytics often mask the similarity between two metrics that originated in different fields. Here, we study a set of graph metrics and compare their relative values and behaviors in an effort to survey their potential contributions to the spatial analysis of networks.
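Two standard node metrics can illustrate the kind of comparison the authors describe, since they can crown different nodes as most central. The toy "dumbbell" graph and node names below are hypothetical illustrations, not the paper's metric set or data; only the Python standard library is used.

```python
from collections import deque

# Hypothetical toy network: two hubs (h1, h2), each with four leaves,
# joined through a middle bridge node m.
edges = [("h1", "m"), ("m", "h2")]
edges += [("h1", f"a{i}") for i in range(4)]
edges += [("h2", f"b{i}") for i in range(4)]

adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def degree_centrality(adj):
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    # BFS from each node gives shortest-path distances to all others.
    out = {}
    n = len(adj)
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        out[s] = (n - 1) / sum(dist.values())
    return out

deg = degree_centrality(adj)
clo = closeness_centrality(adj)
# The two metrics disagree: a hub wins on degree,
# the bridge node wins on closeness.
print(max(deg, key=deg.get), max(clo, key=clo.get))
```

The disagreement between the two rankings on even this small graph is the point: metrics with superficially similar goals can order the same nodes differently, which is why a systematic comparison of their relative values is useful.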
Abstract:
This paper proposes a structural investigation of the Turtle Mountain anticline (Alberta, Canada) to better understand the role of the different tectonic features in the development of both local and large-scale rock slope instabilities occurring in Turtle Mountain. The study area is investigated by combining remote methods with detailed field surveys. In particular, the benefit of Terrestrial Laser Scanning for ductile and brittle tectonic structure interpretations is illustrated. The proposed tectonic interpretation allows the characterization of the fracturing pattern, the fold geometry and the role of these tectonic features in rock slope instability development. Ten discontinuity sets are identified in the study area, their local variations permitting the differentiation of the study zone into 20 homogeneous structural domains. The anticline is described as an east-verging fold that displays considerable geometry differences along its axis and developed by both flexural slip and tangential longitudinal strain folding mechanisms. Moreover, the origins of the discontinuity sets are determined according to the tectonic phases affecting the region (pre-folding, folding, post-folding). The localization and interpretation of the kinematics of the different instabilities revealed the importance of considering the discrete brittle planes of weakness, which largely control the kinematic release of the local instabilities, and also the rock mass damage induced by large tectonic structures (fold hinge, thrust).
Abstract:
This dissertation investigates the nature of space-time as described by the theory of general relativity. It mainly argues that space-time can be naturally interpreted as a physical structure in the precise sense of a network of concrete space-time relations among concrete space-time points that do not possess any intrinsic properties or any intrinsic identity. Such an interpretation is fundamentally based on two related key features of general relativity, namely substantive general covariance and background independence, where substantive general covariance is understood as a gauge-theoretic invariance under active diffeomorphisms and background independence is understood in the sense that the metric (or gravitational) field is dynamical and that, strictly speaking, it cannot be uniquely split into a purely gravitational part and a fixed purely inertial part or background. More broadly, a precise notion of (physical) structure is developed within the framework of a moderate version of structural realism, understood as a metaphysical claim about what there is in the world. The development of this moderate structural realism pursues two main aims. The first is purely metaphysical: to develop a coherent metaphysics of structures and of objects (particular attention is paid to the questions of identity and individuality of the latter within this structural realist framework). The second is to argue that moderate structural realism provides a convincing interpretation of the world as described by fundamental physics, and in particular of space-time as described by general relativity. This structuralist interpretation of space-time is discussed within the traditional substantivalist-relationalist debate, which is best understood within the broader framework of the question about the relationship between space-time on the one hand and matter on the other.
In particular, it is claimed that space-time structuralism does not constitute a 'tertium quid' in the traditional debate. New light on the question of the nature of space-time may be shed by the fundamental foundational issue of space-time singularities. Their possible 'non-local' (or global) character is discussed in some detail, and it is argued that a broad structuralist conception of space-time may provide a physically meaningful understanding of space-time singularities, one that is not plagued by the conceptual difficulties of the usual atomistic framework. Indeed, part of these difficulties may stem from the standard differential-geometric description of space-time, which encodes this atomistic framework to some extent; this raises the question of the importance of the mathematical formalism for the interpretation of space-time.
Abstract:
The Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) recognizes a four-factor scoring structure in addition to the Full Scale IQ (FSIQ) score: the Verbal Comprehension (VCI), Perceptual Reasoning (PRI), Working Memory (WMI), and Processing Speed (PSI) indices. However, several authors have suggested that models based on the Cattell-Horn-Carroll (CHC) theory with 5 or 6 factors provide a better fit to the data than does the current four-factor solution. By comparing the current four-factor structure to CHC-based models, this research aimed to investigate the factorial structure and the constructs underlying the WISC-IV subtest scores with French-speaking Swiss children (N = 249). To this end, confirmatory factor analyses (CFAs) were conducted. Results showed that a CHC-based model with five factors fitted the French-Swiss data better than did the current WISC-IV scoring structure. Altogether, these results support the hypothesis of the appropriateness of the CHC model for French-speaking children.
Abstract:
The present study examines the Five-Factor Model (FFM) of personality and locus of control in French-speaking samples in Burkina Faso (N = 470) and Switzerland (Ns = 1,090, 361), using the Revised NEO Personality Inventory (NEO-PI-R) and Levenson's Internality, Powerful others, and Chance (IPC) scales. Alpha reliabilities were consistently lower in Burkina Faso, but the factor structure of the NEO-PI-R was replicated in both cultures. The intended three-factor structure of the IPC could not be replicated, although a two-factor solution was replicable across the two samples. Although scalar equivalence has not been demonstrated, mean level comparisons showed the hypothesized effects for most of the five factors and locus of control; Burkinabè scored higher in Neuroticism than anticipated. Findings from this African sample generally replicate earlier results from Asian and Western cultures, and are consistent with a biologically-based theory of personality.
Abstract:
A hallmark of group/species A rotavirus (RVA) replication in MA-104 cells is the logarithmic increase in viral mRNAs that occurs 4-12 h post-infection. Viral protein synthesis typically lags closely behind mRNA synthesis but continues after mRNA levels plateau. However, RVA non-structural protein 1 (NSP1) is present at very low levels throughout viral replication despite showing robust protein synthesis. NSP1 has the contrasting properties of being susceptible to proteasomal degradation, but being stabilised against proteasomal degradation by viral proteins and/or viral mRNAs. We aimed to determine the kinetics of the accumulation and intracellular distribution of NSP1 in MA-104 cells infected with rhesus rotavirus (RRV). NSP1 preferentially localises to the perinuclear region of the cytoplasm of infected cells, forming abundant granules that are heterogeneous in size. Late in infection, large NSP1 granules predominate, coincident with a shift from low to high NSP1 expression levels. Our results indicate that rotavirus NSP1 is a late viral protein in MA-104 cells infected with RRV, presumably as a result of altered protein turnover.
Abstract:
Taking on the challenge of understanding and explaining the Symphony of (today's) New World in realistic terms (not realist), this essay aims to analyse the post-Cold War era by devising a multi-conceptual framework that combines different theoretical contributions not yet linked in a fully explanatory way. This paper suggests two inter-related analytical contexts (or background melodies) to understand Dvořák's "New World". First, the socio-economic structural context that falls under the controversial category of Globalization and, second, the post-modern political structural context that is built on Robert Cooper's threefold analysis (Pre-modern, Modern and Post-modern) of today's world [Cooper, R: 1997, 1999]. Lastly, the closing movement (allegro con fuoco) enters the normative arena to assess American foreign policy options in the light of the theoretical framework devised in the first part of the essay.
Abstract:
Customer satisfaction and retention are key issues for organizations in today's competitive market place. As such, much research and revenue has been invested in developing accurate ways of assessing consumer satisfaction at both the macro (national) and micro (organizational) level, facilitating comparisons in performance both within and between industries. Since the inception of the national customer satisfaction indices (CSI), partial least squares (PLS) has been used to estimate the CSI models in preference to structural equation models (SEM) because it does not rely on strict assumptions about the data. However, this choice was based upon some misconceptions about the use of SEMs and does not take into consideration more recent advances in SEM, including estimation methods that are robust to non-normality and missing data. In this paper, the SEM and PLS approaches were compared by evaluating perceptions of the Isle of Man Post Office's products and customer service using a CSI format. The new robust SEM procedures were found to be advantageous over PLS. Product quality was found to be the only driver of customer satisfaction, while image and satisfaction were the only predictors of loyalty, thus arguing for the specificity of postal services.
Abstract:
Hereditary non-structural diseases such as catecholaminergic polymorphic ventricular tachycardia (CPVT), long QT, and the Brugada syndrome as well as structural disease such as hypertrophic cardiomyopathy (HCM) and arrhythmogenic right ventricular cardiomyopathy (ARVC) cause a significant percentage of sudden cardiac deaths in the young. In these cases, genetic testing can be useful and does not require proxy consent if it is carried out at the request of judicial authorities as part of a forensic death investigation. Mutations in several genes are implicated in arrhythmic syndromes, including SCN5A, KCNQ1, KCNH2, RyR2, and genes causing HCM. If the victim's test is positive, this information is important for relatives who might be themselves at risk of carrying the disease-causing mutation. There is no consensus about how professionals should proceed in this context. This article discusses the ethical and legal arguments in favour of and against three options: genetic testing of the deceased victim only; counselling of relatives before testing the victim; counselling restricted to relatives of victims who tested positive for mutations of serious and preventable diseases. Legal cases are mentioned that pertain to the duty of geneticists and other physicians to warn relatives. Although the claim for a legal duty is tenuous, recent publications and guidelines suggest that geneticists and others involved in the multidisciplinary approach of sudden death (SD) cases may, nevertheless, have an ethical duty to inform relatives of SD victims. Several practical problems remain pertaining to the costs of testing, the counselling and to the need to obtain permission of judicial authorities.
Abstract:
We propose a method to evaluate cyclical models which does not require knowledge of the DGP and the exact empirical specification of the aggregate decision rules. We derive robust restrictions in a class of models; use some to identify structural shocks and others to evaluate the model or contrast sub-models. The approach has good size and excellent power properties, even in small samples. We show how to examine the validity of a class of models, sort out the relevance of certain frictions, evaluate the importance of an added feature, and indirectly estimate structural parameters.
Abstract:
Estimates for the U.S. suggest that, at least in some sectors, productivity-enhancing reallocation is the dominant factor in accounting for productivity growth. An open question, particularly relevant for developing countries, is whether reallocation is always productivity enhancing. It may be that imperfect competition or other barriers to competitive environments imply that the reallocation process is not fully efficient in these countries. Using a unique plant-level longitudinal dataset for Colombia for the period 1982-1998, we explore these issues by examining the interaction between market allocation, productivity, and profitability. Moreover, given the important trade, labor, and financial market reforms in Colombia during the early 1990's, we explore whether and how the contribution of reallocation changed over the period of study. Our data permit measurement of plant-level quantities and prices. Taking advantage of the rich structure of our price data, we propose a sequential methodology to estimate productivity and demand shocks at the plant level. First, we estimate total factor productivity (TFP) with plant-level physical output data, where we use downstream demand to instrument inputs. We then turn to estimating demand shocks and mark-ups with plant-level price data, using TFP to instrument for output in the inverse demand equation. We examine the evolution of the distributions of TFP and demand shocks in response to the market reforms in the 1990's. We find that market reforms are associated with rising overall productivity that is largely driven by reallocation away from low- and towards high-productivity businesses. In addition, we find that the allocation of activity across businesses is less driven by demand factors after reforms. We find that the increase in aggregate productivity post-reform is entirely accounted for by the improved allocation of activity.
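The instrumenting step described above rests on the basic instrumental-variables idea: when a regressor is correlated with the error term, a valid instrument recovers the causal coefficient that ordinary least squares misses. The simulated data, coefficient values, and variable names below are assumptions for illustration only, not the paper's Colombian plant data or its sequential estimator.

```python
import random
import statistics

random.seed(0)
true_beta = 2.0
n = 20000

# Simulate an endogenous regressor x: it is correlated with the
# error u, so OLS is biased; the instrument z shifts x but is
# drawn independently of u.
z = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]
x = [zi + 0.8 * ui + random.gauss(0, 0.5) for zi, ui in zip(z, u)]
y = [true_beta * xi + ui for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

beta_ols = cov(x, y) / cov(x, x)  # biased upward by cov(x, u) / var(x)
beta_iv = cov(z, y) / cov(z, x)   # just-identified IV estimator
print(beta_ols, beta_iv)
```

With this setup the OLS estimate sits noticeably above 2.0 while the IV estimate is close to it, which is the sense in which instrumenting inputs with downstream demand can purge the simultaneity bias in productivity estimation.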
Abstract:
A method to evaluate cyclical models not requiring knowledge of the DGP and the exact specification of the aggregate decision rules is proposed. We derive robust restrictions in a class of models; use some to identify structural shocks in the data and others to evaluate the class or contrast sub-models. The approach has good properties, even in small samples, and when the class of models is misspecified. The method is used to sort out the relevance of a certain friction (the presence of rule-of-thumb consumers) in a standard class of models.
Abstract:
The interpretation of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) is based on a 4-factor model, which is only partially compatible with the mainstream Cattell-Horn-Carroll (CHC) model of intelligence measurement. The structure of cognitive batteries is frequently analyzed via exploratory factor analysis and/or confirmatory factor analysis. With classical confirmatory factor analysis, almost all cross-loadings between latent variables and measures are fixed to zero in order to allow the model to be identified. However, inappropriate zero cross-loadings can contribute to poor model fit, distorted factors, and biased factor correlations; most importantly, they do not necessarily faithfully reflect theory. To deal with these methodological and theoretical limitations, we used a new statistical approach, Bayesian structural equation modeling (BSEM), among a sample of 249 French-speaking Swiss children (8-12 years). With BSEM, zero-fixed cross-loadings between latent variables and measures are replaced by approximate zeros, based on informative, small-variance priors. Results indicated that a direct hierarchical CHC-based model with 5 factors plus a general intelligence factor better represented the structure of the WISC-IV than did the 4-factor structure and the higher-order models. Because the direct hierarchical CHC model was more adequate, it was concluded that the general factor should be considered a breadth factor rather than a superordinate factor. Because it was possible for us to estimate the influence of each of the latent variables on the 15 subtest scores, BSEM allowed improvement of the understanding of the structure of intelligence tests and the clinical interpretation of the subtest scores.
Abstract:
Intuitively, we think of perception as providing us with direct cognitive access to physical objects and their properties. But this common sense picture of perception becomes problematic when we notice that perception is not always veridical. In fact, reflection on illusions and hallucinations seems to indicate that perception cannot be what it intuitively appears to be. This clash between intuition and reflection is what generates the puzzle of perception. The task and enterprise of unravelling this puzzle took, and still takes, centre stage in the philosophy of perception. The goal of my dissertation is to make a contribution to this enterprise by formulating and defending a new structural approach to perception and perceptual consciousness. The argument for my structural approach is developed in several steps. Firstly, I develop an empirically inspired causal argument against naïve and direct realist conceptions of perceptual consciousness. Basically, the argument says that perception and hallucination can have the same proximal causes and must thus belong to the same mental kind. I emphasise that this insight gives us good reasons to abandon what we are instinctively driven to believe - namely that perception is directly about the outside physical world. The causal argument essentially highlights that the information that the subject acquires in perceiving a worldly object is always indirect. To put it another way, the argument shows that what we, as perceivers, are immediately aware of, is not an aspect of the world but an aspect of our sensory response to it. A view like this is traditionally known as a Representative Theory of Perception. As a second step, emphasis is put on the task of defending and promoting a new structural version of the Representative Theory of Perception; one that is immune to some major objections that have been standardly levelled at other Representative Theories of Perception. 
As part of this defence and promotion, I argue that it is only the structural features of perceptual experiences that are fit to represent the empirical world. This line of thought is backed up by a detailed study of the intriguing phenomenon of synaesthesia. More precisely, I concentrate on empirical cases of synaesthetic experiences and argue that some of them provide support for a structural approach to perception. The general picture that emerges in this dissertation is a new perspective on perceptual consciousness that is structural through and through.
Abstract:
We consider an economy where the production technology has constant returns to scale but where the decentralized equilibrium exhibits aggregate increasing returns to scale. The result follows from a positive contracting externality among firms. If a firm is surrounded by more firms, employees have more opportunities outside their own firm. This improves employees' incentives to invest in the presence of ex post renegotiation at the firm level, at no cost. Our leading result is that if a region is sparsely populated or if the degree of development in the region is low enough, there are multiple equilibria in the level of sectoral employment. From the theoretical model we derive a non-linear first-order censored difference equation for sectoral employment. Our results are strongly consistent with the multiple-equilibria hypothesis and the existence of a sectoral critical scale (below which the sector follows a delocation process). The scale of the region's population and the degree of development reduce the critical scale of the sector.
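The abstract does not spell out the censored difference equation, so the map below is a hypothetical stand-in, chosen only to reproduce the qualitative features claimed: a non-negativity (censoring) constraint, an unstable critical scale below which the sector delocates toward zero employment, and a stable high-employment equilibrium. The functional form and parameter values are assumptions.

```python
def step(n, gain=0.1, critical=20.0, high=100.0):
    # Censored (non-negative) first-order map with three fixed points:
    # 0 (stable), the critical scale (unstable), and the high level (stable).
    return max(0.0, n + gain * n * (1 - n / high) * (n / critical - 1))

def long_run(n0, periods=500):
    # Iterate the map from an initial sectoral employment level n0.
    n = n0
    for _ in range(periods):
        n = step(n)
    return n

# Below the critical scale the sector delocates toward zero;
# above it, employment converges to the high equilibrium.
print(long_run(10.0), long_run(30.0))
```

Starting at 10 (below the assumed critical scale of 20) employment decays toward zero, while starting at 30 it converges to 100, illustrating how initial scale selects between the multiple equilibria.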