892 results for Formal and Substantial theory of the conflict of interests
Abstract:
Roberts, Michael, 'Recovering a lost inheritance: the marital economy and its absence from the Prehistory of Economics in Britain', in: The Marital Economy in Scandinavia and Britain 1400-1900, (Eds) Ågren, Maria, and Erickson, Amy Louise, Farnham: Ashgate, 2005, pp. 239-256. RAE2008
Abstract:
Instytut Filologii Angielskiej
Abstract:
A neural theory is proposed in which visual search is accomplished by perceptual grouping and segregation, which occur simultaneously across the visual field, and object recognition, which is restricted to a selected region of the field. The theory offers an alternative hypothesis to recently developed variations on Feature Integration Theory (Treisman and Sato, 1991) and the Guided Search Model (Wolfe, Cave, and Franzel, 1989). A neural architecture and search algorithm are specified that quantitatively explain a wide range of psychophysical search data (Wolfe, Cave, and Franzel, 1989; Cohen and Ivry, 1991; Mordkoff, Yantis, and Egeth, 1990; Treisman and Sato, 1991).
Abstract:
Visual search data are given a unified quantitative explanation by a model of how spatial maps in the parietal cortex and object recognition categories in the inferotemporal cortex deploy attentional resources as they reciprocally interact with visual representations in the prestriate cortex. The model's visual representations are organized into multiple boundary and surface representations. Visual search in the model is initiated by organizing multiple items that lie within a given boundary or surface representation into a candidate search grouping. These items are compared with object recognition categories to test for matches or mismatches. Mismatches can trigger deeper searches and recursive selection of new groupings until a target object is identified. This search model is algorithmically specified to quantitatively simulate search data using a single set of parameters, as well as to qualitatively explain a still larger database, including data of Aks and Enns (1992), Bravo and Blake (1990), Chelazzi, Miller, Duncan, and Desimone (1993), Egeth, Virzi, and Garbart (1984), Cohen and Ivry (1991), Enns and Rensink (1990), He and Nakayama (1992), Humphreys, Quinlan, and Riddoch (1989), Mordkoff, Yantis, and Egeth (1990), Nakayama and Silverman (1986), Treisman and Gelade (1980), Treisman and Sato (1990), Wolfe, Cave, and Franzel (1989), and Wolfe and Friedman-Hill (1992). The model hereby provides an alternative to recent variations on the Feature Integration and Guided Search models, and grounds the analysis of visual search in neural models of preattentive vision, attentive object learning and categorization, and attentive spatial localization and orientation.
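The recursive grouping-and-test cycle described above can be caricatured in a few lines of code. This is an illustrative sketch only, not the model's neural implementation: the feature-tuple item encoding, the grouping rule, and the fixed attentional capacity are all assumptions made for the example.

```python
# Toy sketch of the search cycle: form a candidate grouping of items
# sharing a feature with the target; if the grouping is small enough
# for attentive recognition, scan it; otherwise refine the grouping
# recursively on the next feature. All encodings are illustrative.

def find_target(items, target, feature=0, capacity=2):
    """Search for `target` among `items`, given as tuples of features."""
    group = [it for it in items if it[feature] == target[feature]]
    if len(group) <= capacity or feature + 1 >= len(target):
        # Grouping is small enough (or no features remain): recognize.
        return next((it for it in group if it == target), None)
    # Mismatch/ambiguity: recursively select a finer grouping.
    return find_target(group, target, feature + 1, capacity)

scene = [("red", "X"), ("red", "O"), ("red", "T"), ("green", "X")]
print(find_target(scene, ("red", "O")))   # ('red', 'O')
print(find_target(scene, ("blue", "Z")))  # None
```

In this caricature a conjunction target (a red O among red Xs and green Xs) forces a deeper recursion than a target defined by a single feature, loosely echoing the search-slope differences the model addresses.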
Abstract:
A key goal of computational neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how laminar neocortical circuits give rise to biological intelligence. These circuits embody two new and revolutionary computational paradigms: Complementary Computing and Laminar Computing. Circuit properties include a novel synthesis of feedforward and feedback processing, of digital and analog processing, and of pre-attentive and attentive processing. This synthesis clarifies the appeal of Bayesian approaches but has a far greater predictive range that naturally extends to self-organizing processes. Examples from vision and cognition are summarized. A LAMINART architecture unifies properties of visual development, learning, perceptual grouping, attention, and 3D vision. A key modeling theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. It is noted how higher-order attentional constraints can influence multiple cortical regions, and how spatial and object attention work together to learn view-invariant object categories. In particular, a form-fitting spatial attentional shroud can allow an emerging view-invariant object category to remain active while multiple view categories are associated with it during sequences of saccadic eye movements. Finally, the chapter summarizes recent work on the LIST PARSE model of cognitive information processing by the laminar circuits of prefrontal cortex. LIST PARSE models the short-term storage of event sequences in working memory, their unitization through learning into sequence, or list, chunks, and their read-out in planned sequential performance that is under volitional control. LIST PARSE provides a laminar embodiment of Item and Order working memories, also called Competitive Queuing models, that have been supported by both psychophysical and neurobiological data. 
These examples show how variations of a common laminar cortical design can embody properties of visual and cognitive intelligence that seem, at least on the surface, to be mechanistically unrelated.
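As a minimal illustration of the Item and Order (Competitive Queuing) working memories mentioned above: items are stored with a primacy gradient of activation, and read-out repeatedly selects the most active item and then suppresses it. The decay constant and the dictionary representation are assumptions made for the sketch, not LIST PARSE's laminar circuitry.

```python
# Minimal Competitive Queuing sketch: items are stored with a primacy
# gradient (earlier items more active); recall repeatedly picks the
# most active item and suppresses it. The decay value is an assumption.

def store(sequence, decay=0.8):
    # Primacy gradient: activation falls off with list position.
    return {item: decay ** i for i, item in enumerate(sequence)}

def recall(memory):
    out = []
    mem = dict(memory)
    while mem:
        winner = max(mem, key=mem.get)    # competitive choice
        out.append(winner)
        del mem[winner]                   # self-inhibition after read-out
    return out

wm = store(["A", "B", "C", "D"])
print(recall(wm))  # ['A', 'B', 'C', 'D']
```

The stored order is recovered purely from the relative activation levels, which is the core Item and Order idea.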
Abstract:
This article introduces a quantitative model of early visual system function. The model is formulated to unify analyses of spatial and temporal information processing by the nervous system. Functional constraints of the model suggest mechanisms analogous to photoreceptors, bipolar cells, and retinal ganglion cells, which can be formally represented with first-order differential equations. Preliminary numerical simulations and analytical results show that the same formal mechanisms can explain the behavior of both X (linear) and Y (nonlinear) retinal ganglion cell classes by simple changes in the relative width of the receptive field (RF) center and surround mechanisms. Specifically, an increase in the width of the RF center results in a change from X-like to Y-like response, in agreement with anatomical data on the relationship between α- and
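To make the abstract's formal ingredients concrete, here is a minimal numerical sketch of the two pieces it names: a centre-surround (difference-of-Gaussians) receptive field and a first-order dynamical unit dV/dt = (-V + I)/τ. The widths, time constant, and grid are arbitrary illustrative values, not the model's fitted parameters.

```python
import numpy as np

# Illustrative sketch only: a 1-D difference-of-Gaussians receptive
# field and a first-order leaky integrator of the kind the abstract
# invokes. Parameter values are assumptions for the example.

def dog_rf(x, sigma_c=1.0, sigma_s=3.0):
    """Center-surround profile: narrow excitatory minus wide inhibitory."""
    center = np.exp(-x**2 / (2 * sigma_c**2)) / (sigma_c * np.sqrt(2 * np.pi))
    surround = np.exp(-x**2 / (2 * sigma_s**2)) / (sigma_s * np.sqrt(2 * np.pi))
    return center - surround

def integrate(inp, dt=0.01, tau=0.1):
    """Euler steps of the first-order equation dV/dt = (-V + I) / tau."""
    v = 0.0
    for i in inp:
        v += dt * (-v + i) / tau
    return v

x = np.linspace(-10, 10, 201)
rf = dog_rf(x)
print(round(float(rf.max()), 3))  # peak sensitivity at the RF center
```

Widening `sigma_c` toward `sigma_s` flattens the profile, which is the kind of center-width manipulation the abstract links to the X-to-Y transition.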
Abstract:
This thesis is focused on the application of numerical atomic basis sets in studies of the structural, electronic and transport properties of silicon nanowire structures from first principles within the framework of Density Functional Theory. First we critically examine the applied methodology and then offer predictions regarding the transport properties and realisation of silicon nanowire devices. The performance of numerical atomic orbitals is benchmarked against calculations performed with plane-wave basis sets. After establishing the convergence of total energy and electronic structure calculations with increasing basis size, we have shown that their quality greatly improves with the optimisation of the contraction for a fixed basis size. The double zeta polarised basis offers a reasonable approximation for studying structural and electronic properties, and transferability exists between various nanowire structures. This is most important for reducing the computational cost. The impact of basis sets on transport properties in silicon nanowires with oxygen and dopant impurities has also been studied. It is found that whilst transmission features quantitatively converge with increasing contraction, there is a weaker dependence on basis set for the mean free path; the double zeta polarised basis offers a good compromise, whereas the single zeta basis set yields qualitatively reasonable results. Studying the transport properties of nanowire-based transistor setups with p+-n-p+ and p+-i-p+ doping profiles, it is shown that charge self-consistency affects the I-V characteristics more significantly than the basis set choice. It is predicted that such ultrascaled (3 nm length) transistors would show degraded performance due to relatively high source-drain tunnelling currents. Finally, it is shown that the hole mobility of Si nanowires nominally doped with boron decreases monotonically with decreasing width at fixed doping density and with increasing dopant concentration.
Significant mobility variations are identified which can explain experimental observations.
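For readers unfamiliar with how a transmission function becomes an I-V characteristic, a generic Landauer-type current integral can be sketched as follows. This is not the thesis's transport code; the step-like toy transmission function and the bias values are assumptions for the illustration.

```python
import numpy as np

# Generic Landauer current sketch: I = (2e/h) * integral of
# T(E) * [f_source(E) - f_drain(E)] dE. The toy step transmission
# (one open channel above 0.1 eV) is an illustrative assumption.

E_CHARGE = 1.602176634e-19   # C
H_PLANCK = 6.62607015e-34    # J s
KT = 0.025                   # eV, roughly room temperature

def fermi(E, mu):
    return 1.0 / (1.0 + np.exp((E - mu) / KT))

def current(T, mu_s, mu_d, E):
    integrand = T(E) * (fermi(E, mu_s) - fermi(E, mu_d))
    dE = E[1] - E[0]
    # Final E_CHARGE factor converts the eV energy axis to joules.
    return (2 * E_CHARGE / H_PLANCK) * integrand.sum() * dE * E_CHARGE

E = np.linspace(-1.0, 1.0, 2001)
T = lambda e: (e > 0.1).astype(float)    # one conductance channel
I = current(T, mu_s=0.3, mu_d=0.0, E=E)
print(f"{I * 1e6:.1f} uA")
```

In this picture the basis-set and self-consistency choices discussed in the thesis enter through T(E); the rest of the pipeline is fixed by the formula.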
Abstract:
This thesis is structured in the format of a three-part Portfolio of Exploration to facilitate transformation in my ways of knowing and thereby enhance an experienced business practitioner’s capabilities and effectiveness. A key factor in my ways of knowing, as opposed to what I know, is my exploration of context and assumptions. By interacting with my cultural, intellectual, economic, and social history, I seek to become critically aware of the biographical, historical, and cultural context of my beliefs and feelings about myself. This Portfolio is not exclusively for historians of economics or historians of ideas but also for those interested in becoming more aware of how culturally assimilated frames of reference and bundles of assumptions influence the way they perceive, think, decide, feel and interpret their experiences, in order to operate more effectively in their professional and organisational lives. In the first part of my Portfolio, I outline and reflect upon its overarching theory of adult development: the writings of Harvard’s Robert Kegan and Columbia University’s Jack Mezirow. The second part delves further into meaning-making, the activity by which one organises and makes sense of the world, and how meaning-making evolves to different levels of complexity. I explore how past experience and our interpretations of history influence our understanding, since all perception is inevitably tinged with bias and entrenched ‘theory-laden’ assumptions. In my third part, I explore the 1933 inaugural University College Dublin Finlay Lecture delivered by economist John Maynard Keynes. My findings provide a new perspective on and understanding of Keynes’s 1933 lecture by not relying solely upon the text of the three contextualised essay versions of his lecture. The purpose and context of Keynes’s original, longer lecture version were quite different from those of the three shorter essay versions published for the American, British and German audiences.
Abstract:
This cultural history of Argentine crime fiction involves a comprehensive analysis of the literary and critical traditions within the genre, paying particular attention to the series of ‘aesthetic campaigns’ waged by Jorge Luis Borges and others during the period between 1933 and 1977. The methodological approach described in the introductory chapter builds upon the critical insight that in Argentina, generic discourse has consistently been the domain, not only of literary critics in the traditional mould, but also of prominent writers of fiction and specialists from other disciplines, effectively transcending the traditional tripartite ‘division of labour’ between writers, critics and readers. Chapter One charts the early development of crime fiction, and contextualises the evolution of the classical and hardboiled variants that were to provide a durable conceptual framework for discourse in the Argentine context. Chapter Two examines a number of pioneering early works by Argentine authors, before analysing Borges’ multi-faceted aesthetic campaign on behalf of the ‘classical’ detective story. Chapter Three examines a transitional period for the Argentine crime genre, book-ended by the three Vea y Lea magazine-sponsored detective story competitions that acted as a vital stimulus to innovation among Argentine writers. It includes a substantial treatment of the work of Rodolfo Walsh, documenting his transition from crime writer and anthologist to pioneer of the non-fiction novel and investigative journalism traditions. Chapter Four examines the period in which the novela negra came to achieve dominance in Argentina, in particular the aesthetic counter-campaigns conducted by Ricardo Piglia and others on behalf of the hard-boiled variant. The study concludes with a detailed analysis of Pablo Leonardo’s La mala guita (1976), which is considered as a paradigmatic example of crime fiction in Argentina in this period. 
The final chapter presents conclusions and a summary of the dissertation, and recommendations for further research.
Abstract:
We present a theory of hypoellipticity and unique ergodicity for semilinear parabolic stochastic PDEs with "polynomial" nonlinearities and additive noise, considered as abstract evolution equations in some Hilbert space. It is shown that if Hörmander's bracket condition holds at every point of this Hilbert space, then a lower bound on the Malliavin covariance operator M_t can be obtained. Informally, this bound can be read as "Fix any finite-dimensional projection Π on a subspace of sufficiently regular functions. Then the eigenfunctions of M_t with small eigenvalues have only a very small component in the image of Π." We also show how to use a priori bounds on the solutions to the equation to obtain good control on the dependence of the bounds on the Malliavin matrix on the initial condition. These bounds are sufficient in many cases to obtain the asymptotic strong Feller property introduced in [HM06]. One of the main novel technical tools is an almost sure lower bound on the size of "Wiener polynomials," where the coefficients are possibly non-adapted stochastic processes satisfying a Lipschitz condition. By exploiting the polynomial structure of the equations, this result can be used to replace Norris' lemma, which is unavailable in the present context. We conclude by showing that the two-dimensional stochastic Navier-Stokes equations and a large class of reaction-diffusion equations fit the framework of our theory.
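For orientation, the Malliavin covariance operator referred to above has a standard representation for additive-noise equations; the notation below (solution-flow Jacobian J_{s,t}, noise covariance Q) is the generic one and is an assumption about conventions, not a quotation from the paper.

```latex
% Malliavin covariance operator of the solution at time t for
% du = F(u)\,dt + Q\,dW(t), with J_{s,t} the Jacobian of the flow:
\mathcal{M}_t \;=\; \int_0^t J_{s,t}\, Q\, Q^{*}\, J_{s,t}^{*}\, ds .
% The quoted lower bound says, informally, that for unit vectors:
% \langle \varphi, \mathcal{M}_t \varphi \rangle \text{ small}
% \;\Longrightarrow\; \|\Pi \varphi\| \text{ small}.
```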
Abstract:
An event memory is a mental construction of a scene recalled as a single occurrence. It therefore requires the hippocampus and ventral visual stream needed for all scene construction. The construction need not come with a sense of reliving or be made by a participant in the event, and it can be a summary of occurrences from more than one encoding. The mental construction, or physical rendering, of any scene must be done from a specific location and time; this introduces a "self" located in space and time, which is a necessary, but need not be a sufficient, condition for a sense of reliving. We base our theory on scene construction rather than reliving because this allows the integration of many literatures and because there is more accumulated knowledge about scene construction's phenomenology, behavior, and neural basis. Event memory differs from episodic memory in that it does not conflate the independent dimensions of whether or not a memory is relived, is about the self, is recalled voluntarily, or is based on a single encoding with whether it is recalled as a single occurrence of a scene. Thus, we argue that event memory provides a clearer contrast to semantic memory, which also can be about the self, be recalled voluntarily, and be from a unique encoding; allows for a more comprehensive dimensional account of the structure of explicit memory; and better accounts for laboratory and real-world behavioral and neural results, including those from neuropsychology and neuroimaging, than does episodic memory.
Abstract:
This dissertation project explores some of the technical and musical challenges that face pianists in a collaborative role—specifically, those challenges that may be considered virtuosic in nature. The material was chosen from the works of Rachmaninoff and Ravel because of the technically and musically demanding yet idiomatic piano writing. This virtuosic piano writing also extends into the collaborative repertoire. The pieces were also chosen to demonstrate these virtuosic elements in a wide variety of settings. Solo piano pieces were chosen to provide a point of departure, and the programmed works ranged from songs to two-piano works, sonatas, and a piano trio. The recitals were arranged to demonstrate as much contrast as possible, while being grouped by composer. The first recital was performed on April 24, 2009. This recital featured five songs of Rachmaninoff, as well as three solo piano preludes and his Suite No. 2 for two pianos. The second recital occurred on November 16, 2010. This recital featured the music of both Rachmaninoff and Ravel, as well as a short lecture introducing the solo work “Ondine” from Gaspard de la nuit by Ravel. Following the lecture were the Cinq mélodies populaires grecques, and the program closed with the substantial Rachmaninoff Sonata for Cello and Piano. The final program was given on October 10, 2011. This recital featured the music of Ravel, and it included his Sonata for Violin and Piano, the Debussy Nocturnes transcribed for two pianos by Ravel, and the Piano Trio. The inclusion of a transcription of a work by another composer highlights Ravel’s particular style of writing for the piano. All of these recitals were performed at the Gildenhorn Recital Hall in the Clarice Smith Performing Arts Center at the University of Maryland. The recitals are recorded on compact discs, which can be found in the Digital Repository at the University of Maryland (DRUM).
Abstract:
Absolute line intensities in the ν6 and ν8 interacting bands of trans-HCOOH, observed near 1105.4 and 1033.5 cm⁻¹, respectively, and the dissociation constant of the formic acid dimer (HCOOH)₂ have been measured using Fourier transform spectroscopy at a resolution of 0.002 cm⁻¹. Eleven spectra of formic acid, at 296.0(5) K and pressures ranging from 14.28(25) to 314.0(24) Pa, have been recorded between 600 and 1900 cm⁻¹ with an absorption path length of 19.7(2) cm. 437 integrated absorption coefficients have been measured for 72 lines in the ν6 band. Analysis of the pressure dependence yielded the dissociation constant of the formic acid dimer, Kp = 361(45) Pa, and the absolute intensity of the 72 lines of HCOOH. The accuracy of these results was carefully estimated. The absolute intensities of four lines of the weak ν8 band were also measured. Using an appropriate theory, the integrated intensities of the ν6 and ν8 bands were determined to be 3.47 × 10⁻¹⁷ and 4.68 × 10⁻¹⁹ cm⁻¹/(molecule cm⁻²), respectively, at 296 K. Both the dissociation constant and the integrated intensities were compared to earlier measurements. © 2007 American Institute of Physics.
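The pressure-dependence analysis mentioned above rests on the monomer-dimer equilibrium 2 HCOOH ⇌ (HCOOH)₂ with dissociation constant Kp = Pm²/Pd. A small sketch, using the abstract's Kp = 361 Pa, shows how the monomer partial pressure would be recovered from a measured total pressure; the resulting split is illustrative, not a reported datum.

```python
import math

# Monomer-dimer pressure analysis: 2 HCOOH <=> (HCOOH)2 with
# Kp = Pm**2 / Pd, so the monomer partial pressure follows from the
# total pressure P = Pm + Pd. Kp = 361 Pa is taken from the abstract.

def monomer_pressure(p_total, kp=361.0):
    # Pm**2/Kp + Pm - P = 0  =>  take the positive quadratic root.
    return 0.5 * kp * (math.sqrt(1.0 + 4.0 * p_total / kp) - 1.0)

p = 314.0                   # Pa, the highest pressure recorded
pm = monomer_pressure(p)
pd = p - pm
print(round(pm, 1), round(pd, 1))  # monomer vs dimer partial pressure
```

At the lowest recorded pressures the dimer fraction shrinks, which is why a range of pressures is needed to pin down Kp.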
Abstract:
Lennart Åqvist (1992) proposed a logical theory of legal evidence, based on the Bolding-Ekelöf degrees of evidential strength. This paper reformulates Åqvist's model in terms of the probabilistic version of the kappa calculus. Proving its acceptability in the legal context is beyond the present scope, but the epistemological debate about Bayesian approaches to law is clearly relevant. While the present model is a possible link to that line of inquiry, we offer some considerations about the broader picture of the potential of AI & Law in the evidentiary context. Whereas probabilistic reasoning is well researched in AI, calculations about the threshold of persuasion in litigation, whatever their value, are just the tip of the iceberg. The bulk of the modelling desiderata is arguably elsewhere, if one is to ideally make the most of AI's distinctive contribution as envisaged for legal evidence research.
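As a minimal illustration of the probabilistic reading of the kappa calculus invoked above: a rank κ stands for a probability of order ε^κ, ranks add under conjunction, and a disjunction takes the minimum rank. The example ranks and the evidentiary propositions below are assumptions made for the sketch, not Åqvist's or this paper's assignments.

```python
# Probabilistic reading of the kappa calculus: rank k ~ probability of
# order eps**k; conjunction adds ranks, disjunction takes the minimum.
# The example ranks are illustrative assumptions.

def kappa_and(k_a, k_b_given_a):
    return k_a + k_b_given_a     # kappa(A & B) = kappa(A) + kappa(B|A)

def kappa_or(k_a, k_b):
    return min(k_a, k_b)         # kappa(A or B) = min(kappa(A), kappa(B))

def order_of_probability(k, eps=0.01):
    return eps ** k              # rank k corresponds to ~ eps**k

# Toy evidence assessment: the evidence is unsurprising given guilt
# (rank 0) but quite surprising given innocence (rank 2).
k_e_given_guilt, k_e_given_innocence = 0, 2
print(order_of_probability(k_e_given_guilt))      # 1.0
print(order_of_probability(k_e_given_innocence))  # probability ~ eps**2
```

Higher Bolding-Ekelöf degrees of evidential strength would, on this reading, correspond to larger rank gaps between the competing hypotheses.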