873 results for One Over Many Argument
Abstract:
Over the last ten years our understanding of early spatial vision has improved enormously. The long-standing model of probability summation amongst multiple independent mechanisms with static output nonlinearities responsible for masking is obsolete. It has been replaced by a much more complex network of additive, suppressive, and facilitatory interactions and nonlinearities across eyes, area, spatial frequency, and orientation that extend well beyond the classical receptive field (CRF). A review of a substantial body of psychophysical work performed by ourselves (20 papers) and others leads us to the following tentative account of the processing path for signal contrast. The first suppression stage is monocular, isotropic, and non-adaptable; it accelerates with RMS contrast, is most potent for low spatial and high temporal frequencies, and extends slightly beyond the CRF. Second and third stages of suppression are difficult to disentangle but are possibly pre- and post-binocular summation, and involve components that are scale invariant, isotropic, anisotropic, chromatic, achromatic, adaptable, interocular, substantially larger than the CRF, and saturated by contrast. The monocular excitatory pathways begin with half-wave rectification, followed by a preliminary stage of half-binocular summation, a square-law transducer, full binocular summation, pooling over phase, cross-mechanism facilitatory interactions, additive noise, linear summation over area, and a slightly uncertain decision-maker. The purpose of each of these interactions is far from clear, but the system benefits from area and binocular summation of weak contrast signals as well as area and ocularity invariances above threshold (a herd of zebras doesn't change its contrast when it increases in number or when you close one eye). One of many remaining challenges is to determine the stage or stages of spatial tuning in the excitatory pathway.
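As a rough, schematic illustration of the kind of model this account implies (a generic contrast gain-control form with placeholder constants S and Z, weights w_k, and exponents p and q, not the authors' fitted equations), the excitatory response to left- and right-eye contrasts c_L and c_R can be sketched as

\[
r \;=\; \frac{\bigl(\lfloor c_L \rfloor_{+} + \lfloor c_R \rfloor_{+}\bigr)^{p}}{Z \;+\; \sum_{k} w_k\, c_k^{\,q}},
\]

where \(\lfloor \cdot \rfloor_{+}\) denotes half-wave rectification, the numerator stands in for the (approximately square-law, \(p \approx 2\)) transducer and binocular summation, and the divisive pool over components \(c_k\) stands in for the monocular, interocular, and surround suppression stages described above.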
Abstract:
The ERS-1 satellite was launched in July 1991 by the European Space Agency into a polar orbit at about 800 km, carrying a C-band scatterometer. A scatterometer measures the amount of backscattered microwave radiation reflected by small ripples on the ocean surface induced by sea-surface winds, and so provides instantaneous snapshots of wind flow over large areas of the ocean surface, known as wind fields. Inherent in the physics of the observation process is an ambiguity in wind direction; the scatterometer cannot distinguish whether the wind is blowing toward or away from the sensor. This ambiguity implies that there is a one-to-many mapping between scatterometer data and wind direction. Current operational methods for wind field retrieval are based on the retrieval of wind vectors from satellite scatterometer data, followed by a disambiguation and filtering process that is reliant on numerical weather prediction models. The wind vectors are retrieved by locally inverting a forward model that maps wind vectors to scatterometer observations, minimising a cost function in scatterometer measurement space. This thesis applies a pragmatic Bayesian solution to the problem. The likelihood is a combination of conditional probability distributions for the local wind vectors given the scatterometer data. The prior distribution is a vector Gaussian process that provides the geophysical consistency for the wind field. The wind vectors are retrieved directly from the scatterometer data by using mixture density networks, a principled method for modelling multi-modal conditional probability density functions. The complexity of the mapping and the structure of the conditional probability density function are investigated. A hybrid mixture density network, which incorporates the knowledge that the conditional probability distribution of the observation process is predominantly bi-modal, is developed. The optimal model, which generalises across a swathe of scatterometer readings, is better on key performance measures than the current operational model. Wind field retrieval is approached from three perspectives. The first is a non-autonomous method that confirms the validity of the model by retrieving the correct wind field 99% of the time from a test set of 575 wind fields. The second technique takes the maximum a posteriori (MAP) wind field retrieved from the posterior distribution as the prediction. For the third technique, Markov chain Monte Carlo (MCMC) techniques were employed to estimate the mass associated with significant modes of the posterior distribution and to make predictions based on the mode with the greatest mass. General methods for sampling from multi-modal distributions were benchmarked against a specific MCMC transition kernel designed for this problem; the general methods proved unsuitable for this application due to computational expense. On a test set of 100 wind fields the MAP estimate correctly retrieved 72 wind fields, whilst the sampling method correctly retrieved 73 wind fields.
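Schematically (as a sketch of the general approach only, not the thesis's exact formulation), the mixture density network models the conditional density of a local wind vector \(\mathbf{u}_i\) given the scatterometer observation \(\mathbf{s}_i\) as a Gaussian mixture whose parameters are network outputs, and the wind field posterior combines these local densities with the Gaussian process prior:

\[
p(\mathbf{u}_i \mid \mathbf{s}_i) \;=\; \sum_{k=1}^{K} \pi_k(\mathbf{s}_i)\,
\mathcal{N}\!\bigl(\mathbf{u}_i;\ \boldsymbol{\mu}_k(\mathbf{s}_i),\ \sigma_k^2(\mathbf{s}_i)\,\mathbf{I}\bigr),
\qquad
p(\mathbf{U} \mid \mathbf{S}) \;\propto\; p(\mathbf{S} \mid \mathbf{U})\, p(\mathbf{U}),
\]

with \(p(\mathbf{U})\) the vector Gaussian process prior over the whole field; for the predominantly bi-modal directional ambiguity described above, a small number of dominant components (e.g. \(K = 2\)) is the natural choice.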
Abstract:
Universities are increasingly diverse places in terms of the nationality, ethnicity and religious backgrounds of their staff and students. Higher education institutions (HEIs) need to find ways of ensuring that this diversity adds to the life of the institution and to the development of graduates as employees in a global workplace. The paper offers a case study of one way of developing an intercultural strategy at a UK university. The university concerned has a highly multicultural and multinational staff and student population. Over many years the university has worked to celebrate and embed this diversity in the culture and values of the institution: in its learning, teaching, business operations and relationships. The university wished to develop its intercultural awareness strategy in an inspirational and vibrant way, one informed by research and practice. The paper proposes a new integrative approach to developing an intercultural strategy, and summarises some reflections on the process of creating the intercultural awareness strategy which may be of use to other institutions. Analysis showed that in order to make the strategy effective there had to be commitment from senior management to match innovative practices at an individual level. It is also clear that such a strategy must include formal policies and procedures, as well as more informal channels that allow people to express intercultural differences and shared values. The critical role of middle management in strategy implementation is also discussed.
Abstract:
The Lukumí people of Cuba, currently known as Yoruba, are descendants of one of the mightiest West African kingdoms, the Oyo Empire. The Oyo-Yoruba were important cultural contributors to certain areas of the New World such as Cuba, Brazil, Trinidad, and to some degree Haiti and the Lesser Antilles. Anthropologist William Bascom has said that “no African group has had greater influence on New World culture than the Yoruba.” After the devastation of the empire around 1825, two new Oyos emerged. The first, New Oyo, was established about 80 miles south of the ancient site around 1830. The second Oyo was instituted on the other side of the Atlantic Ocean, in the city of Havana and its surrounding towns. Much of Oyo Ile, as ancient Oyo is now called, was transported to the New World, reformed and adapted according to its new surroundings, and it preserved its reign over its “subjects” through the retention and dissemination of its cultural and religious practices. Using an interdisciplinary approach, this investigation argues that of all the African groups brought to Cuba, the Oyo-Yoruba were the most influential in shaping Afro-Cuban culture since their introduction in the nineteenth century. The existence of batá drums in Cuba and the cultural components of this musical genre serve as one of many examples to illustrate the vitality of Oyo cultural hegemony over Afro-Cubans. It is arguable that these drums and the culture that surrounded them were very important instruments used by the Oyo to counter the acculturation of many Africans in Cuba. Likewise, this culture became acculturative in itself by imposing its religious world views on non-Oyo ethnic groups and their descendants. Oral histories and narratives collected among Lukumí practitioners on the island and abroad have been invaluable archives to supplement and/or complement primary and secondary sources of information.
Abstract:
Expositions of student work at the end of the extended school year are one of many reform efforts in a specially formed School Improvement Zone in Miami-Dade schools. This descriptive analysis offers examples of successful attempts to engender pride even in the face of formidable social and cultural obstacles.
Abstract:
The Oscillating Water Column (OWC) is one type of promising wave energy device, owing to its obvious advantage over many other wave energy converters: it has no moving components in sea water. Two types of OWCs (bottom-fixed and floating) have been widely investigated, and bottom-fixed OWCs have been very successful in several practical applications. Recently, the proposal of massive wave energy production and the availability of wave energy have pushed OWC applications from near-shore to deeper water regions, where floating OWCs are a better choice. For an OWC under sea waves, the air flow driving the air turbine to generate electricity is a random process. In such a working condition, a single design/operation point does not exist. To improve energy extraction and to optimise the performance of the device, a system capable of controlling the air turbine rotation speed is desirable. To achieve that, this paper presents a short-term prediction of the random process by an artificial neural network (ANN), which can provide near-future information to the control system. In this research, the ANN is explored and tuned for a better prediction of the airflow (as well as of the device motions, for wider application). It is found that, by carefully constructing the ANN platform and optimizing the relevant parameters, the ANN is capable of predicting the random process a few steps ahead of real time with good accuracy. More importantly, the tuned ANN works for a large range of different types of random process.
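A minimal sketch of this kind of short-term prediction (not the paper's implementation): embed the measured signal in sliding windows of past samples and train a small feed-forward network to predict a few steps ahead. The window length, horizon, network size, and the synthetic stand-in signal below are illustrative placeholders; it assumes NumPy and scikit-learn are available.

```python
# Sketch: short-term prediction of a random process with a feed-forward ANN.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_windows(series, n_lags, horizon):
    """Build (X, y) pairs: X = last n_lags samples, y = value `horizon` steps ahead."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon):
        X.append(series[t - n_lags:t])
        y.append(series[t + horizon])
    return np.array(X), np.array(y)

# Synthetic stand-in for the measured airflow / device-motion signal.
rng = np.random.default_rng(0)
t = np.arange(5000) * 0.1
signal = np.sin(0.5 * t) + 0.3 * np.sin(1.7 * t + 1.0) + 0.1 * rng.standard_normal(t.size)

n_lags, horizon = 20, 5                      # predict 5 samples ahead from 20 past samples
X, y = make_windows(signal, n_lags, horizon)
split = int(0.8 * len(X))                    # simple chronological train/test split

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("test R^2:", model.score(X[split:], y[split:]))
```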
Abstract:
Here we show the use of the 210Pb-226Ra excess method to determine the growth rate of corals from one of the world's largest known cold-water coral reefs, the Røst Reef off Norway. Two large branching framework-forming cold-water coral specimens, one Lophelia pertusa and one Madrepora oculata, were collected alive at 350 m water depth from the Røst Reef at ~67° N and ~9° E. Pb and Ra isotopes were measured along the major growth axis of both specimens using low-level alpha and gamma spectrometry, and the corals' trace element compositions were studied using ICP-QMS. Due to the different chemical behaviours of Pb and Ra in the marine environment, 210Pb and 226Ra were not incorporated in the same way into the aragonite skeleton of these two cold-water corals. Thus, to assess the growth rates of both specimens we have taken into consideration the exponential decrease of initially incorporated 210Pb as well as the ingrowth of 210Pb from the decay of 226Ra. Moreover, a post-depositional 210Pb incorporation is found in relation to the Mn-Fe coatings that could not be entirely removed from the oldest parts of the skeletons. The 226Ra activities in both corals were fairly constant; thus, assuming constant uptake of 210Pb through time, the 210Pb-226Ra chronology can be applied to calculate a linear growth rate. The 45.5 cm long branch of M. oculata reveals an age of 31 yr and a linear growth rate of 14.4 ± 1.1 mm yr-1, i.e. 2.6 polyps per year. However, a correction regarding a remaining post-depositional Mn-Fe oxide coating is needed for the base of the specimen. The corrected age tends to confirm the radiocarbon-derived basal age of 40 yr (using the 14C bomb peak), with a mean growth rate of 2 polyps yr-1. This rate is similar to the one obtained in aquaria experiments under optimal growth conditions. For the 80 cm long specimen of L. pertusa a remaining contamination of metal oxides is observed in the middle and basal parts of the coral skeleton, inhibiting similarly accurate age and growth rate estimates. However, the youngest branch was free of Mn enrichment, and this 15 cm section reveals a growth rate of 8 mm yr-1 (~1 polyp every two to three years). This 210Pb growth rate estimate is within the lowermost range of previous growth rate estimates and may thus reflect that the coral was not developing under optimal growth conditions. Overall, 210Pb-226Ra dating can be successfully applied to determine the age and growth rate of framework-forming cold-water corals; however, removal of post-depositional Mn-Fe oxide deposits is a prerequisite. If successful, large branching M. oculata and L. pertusa coral skeletons provide a unique oceanographic archive for studies of intermediate-water environments with up to annual time resolution spanning many decades.
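For reference, the underlying chronology is the standard 210Pb-226Ra excess formulation, shown here in simplified form (it omits the Mn-Fe coating corrections discussed above): the unsupported 210Pb at distance \(x\) from the growing tip decays with the 210Pb half-life of 22.3 yr, so for a constant linear extension rate \(v\)

\[
{}^{210}\mathrm{Pb}_{xs}(x) \;=\; {}^{210}\mathrm{Pb}(x) - {}^{226}\mathrm{Ra}(x)
\;=\; {}^{210}\mathrm{Pb}_{xs}(0)\,\exp\!\left(-\frac{\lambda x}{v}\right),
\qquad \lambda = \frac{\ln 2}{22.3\ \mathrm{yr}},
\]

so \(v\) follows from the slope of \(\ln {}^{210}\mathrm{Pb}_{xs}\) versus distance along the growth axis, and the age of a skeletal section is simply \(x/v\).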
Abstract:
This thesis focuses on “livsfrågor” (questions of life), a typically Swedish concept introduced in the RE syllabus of the curriculum for compulsory schools in 1969. The study poses three questions: what can qualify as a “livsfråga”, why are they regarded as important, and how do they fit into teaching? The main purpose is to study differences in the concept across two materials: primarily interviews with teacher educators all over Sweden, and secondly the RE syllabi for compulsory and secondary schools from 1962 until today. Finally, the two materials are brought together, and foci are identified with the help of a tool for thought. The study uses the concept of dialogicity from Bachtin. Syllabi are viewed as compromises, in accordance with a German tradition. In the syllabi, “livsfrågor” is one of many different words used without any stringency. It is not necessarily the most important term, as “livsåskådningsfrågor” (questions within philosophies of life) often dominates in the objectives; “existential questions” and similar terms are also used. The relation between the words is never made clear. The syllabi are in one sense monological, as different meanings of the word are not made explicit and other utterances are not invoked. In the interviews the dialogicity is more obvious. Philosophy is mentioned, e.g. Martin Buber, Viktor Frankl, theology (Paul Tillich), but also literature (Lars Gyllensten) and existentialism in a general sense. Other words are not as frequent, but “livsåskådningsfrågor” are of course mentioned, e.g. faith vs. knowledge. In the last chapter “livsfrågor” is problematized with the help of Andrew Wright and his three metanarratives within modern RE, and the assumption, especially in the syllabi, of “livsfrågor” as common between cultures and over time is problematized with the help of feminist theory of knowledge.
Abstract:
A lightweight Java application suite has been developed and deployed allowing collaborative learning between students and tutors at remote locations. Students can engage in group activities online and also collaborate with tutors. A generic Java framework has been developed and applied to electronics, computing and mathematics education. The applications are, respectively: (a) a digital circuit simulator, which allows students to collaborate in building simple or complex electronic circuits; (b) a Java programming environment whose paradigm is behaviour-based robotics; and (c) a differential equation solver useful in modelling any complex and nonlinear dynamic system. Each student sees a common shared window to which text or graphical objects may be added and which can then be shared online. A built-in chat room supports collaborative dialogue. Students can work either in collaborative groups or in teams as directed by the tutor. This paper summarises the technical architecture of the system as well as the pedagogical implications of the suite. A report of student evaluation, distilled from use over a period of twelve months, is also presented. We intend this suite to facilitate learning between groups at one or many institutions and to facilitate international collaboration. We also intend to use the suite as a tool to research the establishment and behaviour of collaborative learning groups. We shall make our software freely available to interested researchers.
Abstract:
Microsecond-long Molecular Dynamics (MD) trajectories of biomolecular processes are now possible due to advances in computer technology. Soon, trajectories long enough to probe dynamics over many milliseconds will become available. Since these timescales match the physiological timescales over which many small proteins fold, all-atom MD simulations of protein folding are now becoming popular. To distill features of such large folding trajectories, we must develop methods that can both compress trajectory data to enable visualization and lend themselves to further analysis, such as finding collective coordinates and reducing the dynamics. Conventionally, clustering has been the most popular MD trajectory analysis technique, followed by principal component analysis (PCA). Simple clustering used in MD trajectory analysis suffers from various serious drawbacks, namely: (i) it is not data driven, (ii) it is unstable to noise and to changes in cutoff parameters, and (iii) since it does not take into account interrelationships amongst data points, the separation of data into clusters can often be artificial. Usually, partitions generated by clustering techniques are validated visually, but such validation is not possible for MD trajectories of protein folding, as the underlying structural transitions are not well understood. Rigorous cluster validation techniques may be adapted, but it is more crucial to reduce the dimensions in which MD trajectories reside while still preserving their salient features. PCA has often been used for dimension reduction and, while it is computationally inexpensive, being a linear method it does not achieve good data compression. In this thesis, I propose a different method, a nonmetric multidimensional scaling (nMDS) technique, which achieves superior data compression by virtue of being nonlinear, and which also provides clear insight into the structural processes underlying MD trajectories. I illustrate the capabilities of nMDS by analyzing three complete villin headpiece folding trajectories and six norleucine mutant (NLE) folding trajectories simulated by Freddolino and Schulten [1]. Using these trajectories, I make comparisons between nMDS, PCA and clustering to demonstrate the superiority of nMDS. The three villin headpiece trajectories showed great structural heterogeneity. Apart from a few trivial features like early formation of secondary structure, no commonalities between trajectories were found. There were no units of residues or atoms found moving in concert across the trajectories. A flipping transition, corresponding to the flipping of helix 1 relative to the plane formed by helices 2 and 3, was observed towards the end of the folding process in all trajectories, when nearly all native contacts had been formed. However, the transition occurred through a different series of steps in each trajectory, indicating that it may not be a common transition in villin folding. All trajectories showed competition between local structure formation/hydrophobic collapse and global structure formation. Our analysis of the NLE trajectories confirms the notion that a tight hydrophobic core inhibits correct 3-D rearrangement. Only one of the six NLE trajectories folded, and it showed no flipping transition. All the other trajectories got trapped in hydrophobically collapsed states.
The NLE residues were found to be buried more deeply in the core, compared to the corresponding lysines in the villin headpiece, thereby making the core tighter and harder to undo for 3-D rearrangement. Our results suggest that the NLE mutant may not be the fast folder that experiments suggest. The tightness of the hydrophobic core may be a very important factor in the folding of larger proteins. It is likely that chaperones like GroEL act to undo the tight hydrophobic core of proteins, after most secondary structure elements have been formed, so that global rearrangement is easier. I conclude by presenting facts about chaperone-protein complexes and propose further directions for the study of protein folding.
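A minimal, self-contained sketch of the nMDS idea (not the thesis's implementation): compute a pairwise dissimilarity matrix between trajectory frames and embed it nonlinearly in a low-dimensional space. The toy `frames` array, the naive RMSD-style dissimilarity (which assumes pre-aligned frames), and all sizes are placeholders; it assumes NumPy and scikit-learn.

```python
# Sketch: nonmetric multidimensional scaling of an MD trajectory.
import numpy as np
from sklearn.manifold import MDS

def pairwise_rmsd(frames):
    """Naive all-vs-all RMSD between pre-aligned frames of shape (n_frames, n_atoms, 3)."""
    n = len(frames)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = np.sqrt(((frames[i] - frames[j]) ** 2).sum(axis=1).mean())
    return d

rng = np.random.default_rng(1)
frames = rng.standard_normal((200, 35, 3))   # toy stand-in for trajectory coordinates

dissim = pairwise_rmsd(frames)
embedding = MDS(n_components=2, metric=False,            # metric=False -> nonmetric MDS
                dissimilarity="precomputed", random_state=0)
coords = embedding.fit_transform(dissim)                 # (n_frames, 2) projection
print(coords.shape, "stress:", embedding.stress_)
```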
Abstract:
Secure transmission of bulk data is of interest to many content providers. A commercially viable distribution of content requires technology to prevent unauthorised access. Encryption tools are powerful, but have a performance cost. Without encryption, intercepted data may be illicitly duplicated and re-sold, or its commercial value diminished because its secrecy is lost. Two technical solutions make it possible to perform bulk transmissions while retaining security without too high a performance overhead. These are: (a) hierarchical encryption - the stronger the encryption, the harder it is to break but also the more computationally expensive it is. A hierarchical approach to key exchange means that simple and relatively weak encryption and keys are used to encrypt small chunks of data, for example 10 seconds of video. Each chunk has its own key. New keys for this bottom-level encryption are exchanged using slightly stronger encryption; for example, a whole-video key could govern the exchange of the 10-second chunk keys. At a higher level again, there could be daily or weekly keys securing the exchange of whole-video keys, and at a yet higher level, a subscriber key could govern the exchange of weekly keys. At higher levels, the encryption becomes stronger but is used less frequently, so that the overall computational cost is minimal. The main observation is that the value of each encrypted item determines the strength of the key used to secure it. (b) non-symbolic fragmentation with signal diversity - communications are usually assumed to be sent over a single communications medium, and the data to have been encrypted and/or partitioned in whole-symbol packets. Network and path diversity break up a file or data stream into fragments which are then sent over many different channels, either in the same network or in different networks. For example, a message could be transmitted partly over the phone network and partly via satellite. While TCP/IP does a similar thing in sending different packets over different paths, this is done for load-balancing purposes and is invisible to the end application. Network and path diversity deliberately introduce the same principle as a secure communications mechanism - an eavesdropper would need to intercept not just one transmission path but all paths used. Non-symbolic fragmentation of data is also introduced to further confuse any intercepted stream of data. This involves breaking up data into bit strings which are subsequently disordered prior to transmission. Even if all transmissions were intercepted, the cryptanalyst still needs to determine fragment boundaries and correctly order them. These two solutions depart from the usual idea of data encryption. Hierarchical encryption is an extension of the combined encryption of systems such as PGP, but with the distinction that the strength of encryption at each level is determined by the "value" of the data being transmitted. Non-symbolic fragmentation suppresses or destroys bit patterns in the transmitted data in what is essentially a bit-level transposition cipher, but with unpredictable, irregularly-sized fragments. Both technologies have applications outside the commercial sphere and can be used in conjunction with other forms of encryption, being functionally orthogonal.
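A minimal sketch of the key-hierarchy idea in (a), not the scheme from the text: cheap per-chunk keys protect the bulk data, and each level's keys are wrapped (encrypted) under the next, less frequently used level. It is built on the `cryptography` package's Fernet recipe purely for illustration; Fernet applies the same cipher strength at every level, whereas the described scheme uses progressively stronger encryption higher up, and names such as `subscriber_key` are placeholders.

```python
# Sketch: hierarchical key wrapping (chunk keys -> video key -> subscriber key).
from cryptography.fernet import Fernet

subscriber_key = Fernet.generate_key()                    # top level: exchanged rarely
video_key = Fernet.generate_key()                         # mid level: one per video
chunk_keys = [Fernet.generate_key() for _ in range(3)]    # bottom level: e.g. per 10 s of video

# Encrypt each chunk of content with its own short-lived key.
chunks = [b"chunk-0 bytes", b"chunk-1 bytes", b"chunk-2 bytes"]
ciphertexts = [Fernet(k).encrypt(c) for k, c in zip(chunk_keys, chunks)]

# Wrap the chunk keys under the video key, and the video key under the subscriber key,
# so higher-level keys only ever encrypt small amounts of key material.
wrapped_chunk_keys = [Fernet(video_key).encrypt(k) for k in chunk_keys]
wrapped_video_key = Fernet(subscriber_key).encrypt(video_key)

# Receiver side: unwrap top-down, then decrypt the content.
recovered_video_key = Fernet(subscriber_key).decrypt(wrapped_video_key)
recovered_chunk_keys = [Fernet(recovered_video_key).decrypt(w) for w in wrapped_chunk_keys]
plaintexts = [Fernet(k).decrypt(c) for k, c in zip(recovered_chunk_keys, ciphertexts)]
assert plaintexts == chunks
```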
Abstract:
The Caspian Sea, with its unique characteristics, is a significant source of the heat and moisture required by weather systems passing over the north of Iran. Investigation of heat and moisture fluxes in the region, and of their effects on these systems, which can lead to floods and major financial and human losses, is essential for weather forecasting. Nowadays, with the improvement of numerical weather and climate prediction models and the increasing need for more accurate forecasting of heavy rainfall, the evaluation and verification of these models has become much more important. In this study we have used the WRF model, a model suited to both research and practical applications with many valuable characteristics and flexibilities. In this research, the effects of the heat and moisture fluxes of the Caspian Sea on the synoptic and dynamical structure of 20 selected systems associated with heavy rainfall on the southern shores of the Caspian Sea are investigated. These systems were selected, based on the rainfall data gathered by three local stations (Rasht, Babolsar and Gorgan) in different seasons during a five-year period (2005-2010), as those with the maximum amount of rainfall over 24 hours. In addition to synoptic analyses of these systems, the WRF model was run with and without surface fluxes using two nested grids with horizontal resolutions of 12 and 36 km. The results show good consistency between the model and the observations for the predicted distribution of the rainfall field and the times of onset and end of rainfall. However, the model underestimates the amounts of rainfall, and the maximum difference from the observations is about 69%. Also, no significant changes in the results are seen when the domain and the resolution of the computations are changed. The other noticeable point is that the systems are severely weakened by removing the heat and moisture fluxes, whereby the amounts of large-scale rainfall are decreased by up to 77% and the convective rainfall tends to zero.
Abstract:
The origin of observed ultra-high energy cosmic rays (UHECRs, energies in excess of $10^{18.5}$ eV) remains unknown, as extragalactic magnetic fields deflect these charged particles from their true origin. Interactions of these UHECRs at their source would invariably produce high energy neutrinos. As these neutrinos are chargeless and nearly massless, their propagation through the universe is unimpeded and their detection can be correlated with the origin of UHECRs. Gamma-ray bursts (GRBs) are one of the few possible origins for UHECRs, observed as short, immensely bright outbursts of gamma-rays at cosmological distances. The energy density of GRBs in the universe is capable of explaining the measured UHECR flux, making them promising UHECR sources. Interactions between UHECRs and the prompt gamma-ray emission of a GRB would produce neutrinos that would be detected in coincidence with the GRB’s gamma-ray emission. The IceCube Neutrino Observatory can be used to search for these neutrinos in coincidence with GRBs, detecting neutrinos through the Cherenkov radiation emitted by secondary charged particles produced in neutrino interactions in the South Pole glacial ice. Restricting these searches to be in coincidence with GRB gamma-ray emission, analyses can be performed with very little atmospheric background. Previous searches have focused on detecting muon tracks from muon neutrino interactions from the Northern Hemisphere, where the Earth shields IceCube’s primary background of atmospheric muons, or spherical cascade events from neutrinos of all flavors from the entire sky, with no compelling neutrino signal found. Neutrino searches from GRBs with IceCube have been extended to a search for muon tracks in the Southern Hemisphere in coincidence with 664 GRBs over five years of IceCube data in this dissertation. Though this region of the sky contains IceCube’s primary background of atmospheric muons, it is also where IceCube is most sensitive to neutrinos at the very highest energies as Earth absorption in the Northern Hemisphere becomes relevant. As previous neutrino searches have strongly constrained neutrino production in GRBs, a new per-GRB analysis is introduced for the first time to discover neutrinos in coincidence with possibly rare neutrino-bright GRBs. A stacked analysis is also performed to discover a weak neutrino signal distributed over many GRBs. Results of this search are found to be consistent with atmospheric muon backgrounds. Combining this result with previously published searches for muon neutrino tracks in the Northern Hemisphere, cascade event searches over the entire sky, and an extension of the Northern Hemisphere track search in three additional years of IceCube data that is consistent with atmospheric backgrounds, the most stringent limits yet can be placed on prompt neutrino production in GRBs, which increasingly disfavor GRBs as primary sources of UHECRs in current GRB models.
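As a schematic reference for what a stacked search optimizes (a generic unbinned likelihood of the kind commonly used in IceCube source searches, not necessarily the dissertation's exact per-GRB or stacked constructions):

\[
\mathcal{L}(n_s) \;=\; \prod_{i=1}^{N}\left[\frac{n_s}{N}\,\mathcal{S}(x_i) \;+\; \Bigl(1-\frac{n_s}{N}\Bigr)\,\mathcal{B}(x_i)\right],
\qquad
\mathcal{S}(x_i) \;=\; \frac{\sum_{g} w_g\, S_g(x_i)}{\sum_{g} w_g},
\]

where \(N\) is the number of observed events, \(n_s\) the fitted number of signal events, \(S_g\) the signal PDF (direction, time, energy) for GRB \(g\) with weight \(w_g\), and \(\mathcal{B}\) the atmospheric background PDF; the best-fit \(n_s\) is compared against the background-only hypothesis to form the test statistic.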
Abstract:
While fault-tolerant quantum computation might still be years away, analog quantum simulators offer a way to leverage current quantum technologies to study classically intractable quantum systems. Cutting-edge quantum simulators such as those utilizing ultracold atoms are beginning to study physics which surpasses what is classically tractable. As the system sizes of these quantum simulators increase, there are also concurrent gains in the complexity and types of Hamiltonians which can be simulated. In this work, I describe advances toward the realization of an adaptable, tunable quantum simulator capable of surpassing classical computation. We simulate long-ranged Ising and XY spin models which can have arbitrary global transverse and longitudinal fields in addition to individual transverse fields, using a linear chain of up to 24 171Yb+ ions confined in a linear rf Paul trap. Each qubit is encoded in the ground-state hyperfine levels of an ion. Spin-spin interactions are engineered by the application of spin-dependent forces from laser fields, coupling spin to motion. Each spin can be read independently using state-dependent fluorescence. The results here add yet more tools to an ever-growing quantum simulation toolbox. One of many challenges has been the coherent manipulation of individual qubits. By using a surprisingly large fourth-order Stark shift in a clock-state qubit, we demonstrate an ability to individually manipulate spins and apply independent Hamiltonian terms, greatly increasing the range of quantum simulations which can be implemented. As quantum systems grow beyond the capability of classical numerics, a constant question is how to verify a quantum simulation. Here, I present measurements which may provide useful metrics for large system sizes and demonstrate them in a system of up to 24 ions during a classically intractable simulation. The observed values are consistent with extremely large entangled states, with as much as ~95% of the system entangled. Finally, we use many of these techniques in order to generate a spin Hamiltonian which fails to thermalize during experimental time scales due to a meta-stable state which is often called prethermal. The observed prethermal state is a new form of prethermalization which arises due to long-range interactions and open boundary conditions, even in the thermodynamic limit. This prethermalization is observed in a system of up to 22 spins. We expect that system sizes can be extended up to 30 spins with only minor upgrades to the current apparatus. These results emphasize that as the technology improves, the techniques and tools developed here can potentially be used to perform simulations which will surpass the capability of even the most sophisticated classical techniques, enabling the study of a whole new regime of quantum many-body physics.
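For orientation, the simulated spin models are of the following general form, written here with one conventional choice of axes (the exact axes, field patterns, and the XY variant in the experiments may differ):

\[
H_{\mathrm{Ising}} \;=\; \sum_{i<j} J_{ij}\,\sigma^{x}_{i}\sigma^{x}_{j}
\;+\; B_{y}\sum_{i}\sigma^{y}_{i}
\;+\; B_{x}\sum_{i}\sigma^{x}_{i}
\;+\; \sum_{i} b_{i}\,\sigma^{y}_{i},
\qquad
J_{ij} \;\approx\; \frac{J_{0}}{|i-j|^{\alpha}},
\]

where the global transverse (\(B_y\)) and longitudinal (\(B_x\)) fields and the individual transverse fields \(b_i\) correspond to the terms described above, and the laser-engineered couplings \(J_{ij}\) fall off approximately as a tunable power law (\(0 < \alpha < 3\)) with ion separation.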
Abstract:
My dissertation defends a positive answer to the question: “Can a videogame be a work of art?” To achieve this goal I develop definitions of several concepts, primarily ‘art’, ‘games’, and ‘videogames’, and offer arguments about the compatibility of these notions. In Part One, I defend a definition of art from amongst several contemporary and historical accounts. This definition, the Intentional-Historical account, requires, among other things, that an artwork have the right kind of creative intentions behind it; in short, that the work be intended to be regarded in a particular manner. This is a leading account that has faced several recent objections, which I address, in particular the buck-passing theory, the objection against non-failure theories of art, and the simultaneous creation response to the ur-art problem, while arguing that it is superior to other theories in its ability to answer the question of videogames’ art status. Part Two examines whether games can exhibit the art-making kind of creative intention. Recent literature has suggested that they can. To verify this, a definition of games is needed. I review and develop the most promising account of games in the literature, the overlooked account from Bernard Suits. I propose and defend a modified version of this definition against other accounts. Interestingly, this account entails that games cannot be successfully intended to be works of art, because games are goal-directed activities that require a voluntary selection of inefficient means, and this is incompatible with the proper manner of regard that is necessary for something to be an artwork. While the conclusions of Part One and Part Two may appear to suggest that videogames cannot be works of art, Part Three proposes and defends a new account of videogames that, contrary to first appearances, implies that not all videogames are games. This Intentional-Historical Formalist account allows for non-game videogames to be created with an art-making intention, though not every non-ludic videogame will have an art-making intention behind it. I then discuss examples of videogames that are good candidates for being works of art. I conclude that a videogame can be a work of art, but that not all videogames are works of art. The thesis is significant in several respects. It is a continuation of academic work that has focused on the definition and art status of videogames. It clarifies the current debate and provides a positive account of the central issues that has so far been lacking. It also defines videogames in a way that corresponds better with the actual practice of videogame making and playing than other definitions in the literature. It offers further evidence in defense of certain theories of art over others, providing a close examination of videogames as a new case study for potential art objects and for aesthetic and artistic theory in general. Finally, it provides a compelling answer to the question of whether videogames can be art. This project also provides the groundwork for new evaluative, critical, and appreciative tools for engagement with videogames as they develop as a medium. As videogames mature, more people, both inside and outside academia, have increasing interest in what they are and how to understand them. One place many have looked is to the practice of art appreciation. My project helps make sense of which appreciative and art-critical tools and methods are applicable to videogames.