999 results for LUMINOSITY
Abstract:
Following the discovery of some 2000 brown dwarfs over the past two decades, the physics of these objects, intermediate in mass between stars and planets, is beginning to be well understood. Nevertheless, atmosphere and evolution models for these low-mass objects still struggle to reproduce their characteristics faithfully at the youngest ages. This work presents the characterization of four substellar-mass companions (8-30 MJup) on wide orbits (300-900 AU) around young (5 Myr) stars of the Upper Scorpius star-forming region. New spectra (0.9-2.5 μm) and new photometric measurements (YJHKsL') are presented and analyzed in order to determine the mass, effective temperature, luminosity, and surface gravity of these companions, while assessing how faithfully the synthetic spectra drawn from two recent atmosphere models reproduce the observed spectra.
Abstract:
The following thesis is organized in two parts: the first deals with galaxy mass models, and the second with the design of optical coatings and the control of their mechanical properties. The mass models presented in this thesis were built for a subsample of ten galaxies from the SINGS survey, comprising nine normal galaxies and one dwarf galaxy. This work aimed to fix the mass-to-light ratio of the disk at every radius using the results of chemo-spectrophotometric galactic evolution models fitted specifically to each galaxy through its multi-band photometric profile. The results show that the stellar disks, as normalized by the mass-to-light ratios derived from the models, have consistent masses across all the bands studied, from the ultraviolet through the visible to the near-infrared (FUV to IRAC2 bands). These disks can be considered maximal with respect to the kinematic data of the galaxies studied. This is because the M/L ratio is higher in the center than at the edges. With the disks being maximal and physically justified, the effects of components such as bulges and bars can no longer be ignored, and the necessary corrections must be applied to the luminosity and rotation-velocity profiles of the galaxy. In the second part, the open-source software OpenFilters was modified to account for mechanical stresses in the numerical design of optical coatings. Mechanical stresses in thin films have a deleterious effect on their optical performance. A coating intended to make the plates of a Fabry-Perot etalon used in astronomy reflective was designed and fabricated in order to evaluate the real-world performance of the method. This case was chosen because of the reduction in the finesse of a Fabry-Perot etalon caused by the bending of the plates under stress. The results show that the measurements agree with the numerical models, and that it is therefore possible, using this software, to optimize coatings for their mechanical behavior as well as their optical properties.
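The mass-model normalization described in the first part rests on a simple scaling: a disk rotation-curve contribution computed for M/L = 1 scales as the square root of the adopted mass-to-light ratio, and velocity contributions add in quadrature. A minimal sketch of that arithmetic, with all numbers hypothetical and not taken from the thesis:

```python
import math

def scale_disk_velocity(v_disk_unit, ml_ratio):
    """Scale a disk rotation-curve contribution computed for M/L = 1
    to an arbitrary mass-to-light ratio: v scales as sqrt(M/L)."""
    return [v * math.sqrt(ml_ratio) for v in v_disk_unit]

def total_rotation_velocity(v_disk, v_halo):
    """Component velocities add in quadrature."""
    return [math.sqrt(d * d + h * h) for d, h in zip(v_disk, v_halo)]

# Hypothetical unit-M/L disk curve and halo curve (km/s), three radii
v_disk_unit = [50.0, 80.0, 90.0]
v_halo = [30.0, 60.0, 100.0]

v_disk = scale_disk_velocity(v_disk_unit, ml_ratio=2.0)
v_total = total_rotation_velocity(v_disk, v_halo)
```

Fixing the M/L profile from the evolution models, as the thesis does, removes the usual freedom to rescale the disk arbitrarily when fitting the observed rotation curve.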
Abstract:
The tool developed as part of this thesis is available at the following address: www.astro.umontreal.ca/~malo/banyan.php
Abstract:
White dwarf stars represent the endpoint of evolution for 97% of the stars in our Galaxy, including our Sun. The study of the global properties of these stars (temperature distribution, mass distribution, luminosity function, etc.) requires the construction of statistically complete and well-defined samples. Although several white dwarf surveys exist in the literature, most of them suffer from significant statistical biases for this kind of analysis. The most representative sample of the white dwarf population remains, to this day, the one defined in a complete volume restricted to the immediate neighborhood of the Sun, within a distance of 20 pc (~65 light-years). Unfortunately, since white dwarfs are intrinsically faint stars, this sample contains only ~130 objects, compromising any meaningful statistical study. The goal of our study is to survey the white dwarf population in the solar neighborhood out to a distance of 40 pc, a volume eight times larger. We therefore undertook to catalogue all white dwarf stars within 40 pc of the Sun using SUPERBLINK, a vast catalogue containing proper motions and photometric data for more than 2 million stars. Our approach is based on the reduced proper motion method, which separates white dwarfs from other stellar populations. The distances of all white dwarf candidates are estimated using theoretical color-magnitude relations in order to identify the objects lying within 40 pc of the Sun, in the northern hemisphere.
Spectroscopic confirmation of the white dwarf status of our ~1100 candidates then required 15 observing runs on three large telescopes at Kitt Peak in Arizona, as well as some sixty hours allocated on the 8 m telescopes of the Gemini North and South observatories. We thus discovered 322 new white dwarf stars of several different spectral types, 173 of which lie within 40 pc, a 40% increase in the number of known white dwarfs within this volume. Among these new white dwarfs, 4 probably lie within 20 pc of the Sun. Furthermore, we demonstrate that our technique is very effective at identifying white dwarfs in the crowded region of the Galactic plane. We then present a detailed spectroscopic and photometric analysis of our sample using atmosphere models in order to determine the physical properties of these stars, notably their temperature, surface gravity, and chemical composition. Our statistical analysis of these properties, based on a sample nearly three times larger than the 20 pc one, reveals that we have successfully identified the most massive, and hence least luminous, stars of this population, which are often missing from most published surveys. We have also identified several very cool, and hence potentially very old, white dwarfs, which allow us to better define the cool end of the luminosity function and, eventually, the age of the Galactic disk. Finally, we also discovered several objects of astrophysical interest, including two new ZZ Ceti variable white dwarfs, several magnetic white dwarfs, and numerous unresolved binary systems.
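The reduced proper motion method mentioned above combines apparent magnitude and proper motion into a distance-independent quantity, H = m + 5 log10(μ) + 5 (μ in arcsec/yr), in which white dwarfs separate from subdwarfs and main-sequence stars when plotted against color. A minimal sketch; the selection cut here is hypothetical, not the one used in the survey:

```python
import math

def reduced_proper_motion(m_app, mu_arcsec_per_yr):
    """H = m + 5 log10(mu) + 5, with mu in arcsec/yr.
    Plays the role of an absolute magnitude when distances are unknown."""
    return m_app + 5.0 * math.log10(mu_arcsec_per_yr) + 5.0

def is_white_dwarf_candidate(m_app, mu, color, cut_offset=15.0):
    """Hypothetical linear cut in the (color, H) plane: white dwarf
    candidates lie below (fainter than) the subdwarf locus."""
    return reduced_proper_motion(m_app, mu) > 2.5 * color + cut_offset

# A faint, fast-moving star with a blue color passes the (toy) cut
h = reduced_proper_motion(m_app=17.0, mu_arcsec_per_yr=0.5)
selected = is_white_dwarf_candidate(17.0, 0.5, color=1.0)
```

In practice such cuts are calibrated against spectroscopically confirmed samples, precisely because high-velocity subdwarfs can leak across any fixed line.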
Abstract:
LaTeX was used to typeset this thesis.
Abstract:
Medipix2 (MPX) devices are silicon semiconductor detectors segmented into 256x256 pixels. Each pixel has an area of 55x55 μm². The active area of an MPX detector is about 2 cm². With two detection modes and an adjustable threshold and exposure time, their operation can be optimized for a specific analysis. Sixteen of these detectors are currently installed in the ATLAS (A Toroidal LHC ApparatuS) experiment at CERN (the European Organization for Nuclear Research). They measure in real time the radiation field produced by proton-proton collisions at interaction point IP1 (Interaction Point 1) of the LHC (Large Hadron Collider). These measurements serve various purposes, such as measuring the neutron field in the ATLAS cavern. The MPX detector network is completely independent of the ATLAS detector. The ATLAS-Montreal group set out to analyze the data collected by these detectors in order to compute a value of the LHC luminosity at the beam collision point around which the ATLAS detector is built. This value is determined independently of the luminosity measured by the various ATLAS subdetectors specifically dedicated to luminosity measurement. As the LHC luminosity increases, the MPX detectors closest to the interaction point register so many particles that their tracks can no longer be distinguished in the resulting frames because they overlap. The measurement parameters of some of these detectors were optimized for luminosity measurements. A data analysis method filters out noisy pixels and converts the frame data, which correspond to exposure times specific to the MPX detectors, into a luminosity value for each LumiBlock. A LumiBlock is a measurement time interval specific to the ATLAS detector.
The luminosity measurements were validated first by comparing the results obtained by different MPX detectors, and then by comparing the measured luminosity values with those obtained by the ATLAS subdetectors specifically dedicated to luminosity measurement.
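The conversion pipeline described above amounts to masking noisy pixels, summing particle hits per frame, and scaling the count rate by a calibration constant anchored to a reference luminosity measurement. A minimal sketch; the flat frame format, occupancy cut, and calibration value are all hypothetical, not the actual MPX analysis:

```python
def mask_noisy_pixels(frames, max_occupancy=0.5):
    """Return the set of pixel indices that fire in more than
    max_occupancy of the frames; these are treated as noisy."""
    n_frames = len(frames)
    noisy = set()
    for i in range(len(frames[0])):
        fired = sum(1 for frame in frames if frame[i] > 0)
        if fired / n_frames > max_occupancy:
            noisy.add(i)
    return noisy

def luminosity_per_lumiblock(frames, exposure_s, calib, noisy):
    """Convert summed hit counts (excluding noisy pixels) to a
    luminosity value: counts per second times a calibration constant."""
    counts = sum(frame[i] for frame in frames
                 for i in range(len(frame)) if i not in noisy)
    return calib * counts / exposure_s

# Two toy 3-pixel frames: pixel 0 fires in every frame (noisy)
frames = [[1, 0, 5], [1, 0, 0]]
noisy = mask_noisy_pixels(frames)
lumi = luminosity_per_lumiblock(frames, exposure_s=10.0, calib=2.0, noisy=noisy)
```

Cross-checking the resulting values between detectors, as the abstract describes, is what guards against a single detector's calibration drifting.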
Abstract:
The study of variable stars is an important topic in modern astrophysics. With the advent of powerful telescopes and high-resolution CCDs, variable star data are accumulating on the order of petabytes. This huge amount of data calls for automated methods as well as human experts. This thesis is devoted to the analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic), or irregular (aperiodic), and is caused by various mechanisms. In some cases the variation is due to internal thermonuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, such as eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables, and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can in turn be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry, and spectroscopy. A sequence of photometric observations of a variable star produces time series data containing time, magnitude, and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series is folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star.
One way to identify the type of a variable star and to classify it is for an expert to visually inspect the phased light curve. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into stages such as observation, data reduction, data analysis, modeling, and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries), and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition, and evolution. Classification requires the determination of basic parameters such as period, amplitude, and phase, along with some other derived parameters. Of these, the period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis applies mathematical and statistical tests to data in order to quantify the variation, understand the nature of the time-varying phenomenon, gain physical understanding of the system, and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions, and the possibility of large gaps. For ground-based observations this is due to the day/night cycle and changing weather conditions, while observations from space may suffer from the impact of cosmic ray particles. Many large-scale astronomical surveys, such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST, and CRTS, provide variable star time series data, even though their primary intention is not variable star observation.
The Center for Astrostatistics at Pennsylvania State University was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. Many period search algorithms exist for astronomical time series analysis; they can be classified into parametric methods (which assume some underlying distribution for the data) and non-parametric methods (which do not assume any statistical model, such as a Gaussian). Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and Significant Spectrum (SigSpec) by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detections can arise for several reasons, such as power leakage to other frequencies, which is due to the finite total interval, the finite sampling interval, and the finite amount of data. Another problem is aliasing, caused by regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data remains a difficult problem for huge databases subjected to automation. As Matthew Templeton (AAVSO) states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of the huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification".
It will benefit the variable star astronomical community if basic parameters such as period, amplitude, and phase can be obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, their strengths and weaknesses are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases such as the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
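The phase-folding and dispersion ideas behind methods such as Stellingwerf's PDM can be sketched compactly: fold the time series on a trial period, bin the phases, and measure how much the within-bin scatter shrinks relative to the overall scatter; the true period minimizes that ratio. A minimal sketch of the idea (a simplified statistic, not Stellingwerf's exact formulation):

```python
import math

def fold(times, period):
    """Phase-fold observation times on a trial period; phases lie in [0, 1)."""
    return [(t % period) / period for t in times]

def dispersion(times, mags, period, n_bins=5):
    """Within-bin scatter of the folded light curve, normalized by the
    total variance; small values indicate a good trial period."""
    phases = fold(times, period)
    bins = [[] for _ in range(n_bins)]
    for p, m in zip(phases, mags):
        bins[min(int(p * n_bins), n_bins - 1)].append(m)
    mean = sum(mags) / len(mags)
    total_var = sum((m - mean) ** 2 for m in mags) / len(mags)
    within = 0.0
    for b in bins:
        if len(b) > 1:
            mu = sum(b) / len(b)
            within += sum((m - mu) ** 2 for m in b)
    return within / (len(mags) * total_var)

# Hypothetical sinusoidal variable sampled unevenly; true period 2.5 d
times = [0.0, 0.6, 1.3, 1.9, 2.6, 3.2, 3.9, 4.5, 5.2, 5.8,
         6.5, 7.1, 7.8, 8.4, 9.1, 9.7, 10.4, 11.0, 11.7, 12.3]
mags = [math.sin(2 * math.pi * t / 2.5) for t in times]
# Scanning a grid of trial periods, the dispersion dips near 2.5 d
```

Uneven sampling is no obstacle here, which is exactly why folding-based statistics remain popular for ground-based survey data with nightly gaps.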
Abstract:
We present the complete next-to-leading order QCD corrections to the polarized hadroproduction of heavy flavors. This reaction can be studied experimentally in polarized pp collisions at the JHF and at the BNL RHIC in order to constrain the polarized gluon density. It is demonstrated that the dependence on the unphysical renormalization and factorization scales is strongly reduced beyond the leading order. We also discuss how the high luminosity at the JHF can be used to control remaining theoretical uncertainties. An effective method for bridging the gap between theoretical predictions for heavy quarks and experimental measurements of heavy meson decay products is introduced briefly.
Abstract:
The coupled climate dynamics underlying large, rapid, and potentially irreversible changes in ice cover are studied. A global atmosphere–ocean–sea ice general circulation model with idealized aquaplanet geometry is forced by gradual multi-millennial variations in solar luminosity. The model traverses a hysteresis loop between warm ice-free conditions and cold glacial conditions in response to ±5 W m−2 variations in global, annual-mean insolation. Comparison of several model configurations confirms the importance of polar ocean processes in setting the sensitivity and time scales of the transitions. A “sawtooth” character is found with faster warming and slower cooling, reflecting the opposing effects of surface heating and cooling on upper-ocean buoyancy and, thus, effective heat capacity. The transition from a glacial to warm, equable climate occurs in about 200 years. In contrast to the “freshwater hosing” scenario, transitions are driven by radiative forcing and sea ice feedbacks. The ocean circulation, and notably the meridional overturning circulation (MOC), does not drive the climate change. The MOC (and associated heat transport) collapses poleward of the advancing ice edge, but this is a purely passive response to cooling and ice expansion. The MOC does, however, play a key role in setting the time scales of the transition and contributes to the asymmetry between warming and cooling.
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated resulting in too much land carbon loss or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one thousand year long, idealized, 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. 
Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account, when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
Abstract:
Giant planets helped to shape the conditions we see in the Solar System today, and they account for more than 99% of the mass of the Sun’s planetary system. They can be subdivided into the Ice Giants (Uranus and Neptune) and the Gas Giants (Jupiter and Saturn), which differ from each other in a number of fundamental ways. Uranus, in particular, is the most challenging to our understanding of planetary formation and evolution, with its large obliquity, low self-luminosity, highly asymmetrical internal field, and puzzling internal structure. Uranus also has a rich planetary system consisting of a set of inner natural satellites and a complex ring system, five major icy satellites, a system of irregular moons with varied dynamical histories, and a highly asymmetrical magnetosphere. Voyager 2 is the only spacecraft to have explored Uranus, with a flyby in 1986, and no mission is currently planned to this enigmatic system. However, a mission to the uranian system would open a new window on the origin and evolution of the Solar System and would provide crucial information on a wide variety of physicochemical processes in our Solar System. These have clear implications for understanding exoplanetary systems. In this paper we describe the science case for an orbital mission to Uranus with an atmospheric entry probe to sample the composition and atmospheric physics in Uranus’ atmosphere. The characteristics of such an orbiter and a strawman scientific payload are described, and we discuss the technical challenges for such a mission. This paper is based on a white paper submitted to the European Space Agency’s call for science themes for its large-class mission programme in 2013.
Abstract:
We report observations of the cusp/cleft ionosphere made on December 16th 1998 by the EISCAT (European incoherent scatter) VHF radar at Tromso and the EISCAT Svalbard radar (ESR). We compare them with observations of the dayside auroral luminosity, as seen by meridian scanning photometers at Ny Alesund, and of HF radar backscatter, as observed by the CUTLASS radar. We study the response to an interval of about one hour when the interplanetary magnetic field (IMF), monitored by the WIND and ACE spacecraft, was southward. The cusp/cleft aurora is shown to correspond to a spatially extended region of elevated electron temperatures in the VHF radar data. Initial conditions were characterised by a northward-directed IMF and cusp/cleft aurora poleward of the ESR. A strong southward turning then occurred, causing an equatorward motion of the cusp/cleft aurora. Within the equatorward-expanding, southward-IMF cusp/cleft, the ESR observed structured and elevated plasma densities and ion and electron temperatures. Cleft ion fountain upflows were seen in association with elevated ion temperatures and rapid eastward convection, consistent with the magnetic curvature force on newly opened field lines for the observed negative IMF B-y. Subsequently, the ESR beam remained immediately poleward of the main cusp/cleft and a sequence of poleward-moving auroral transients passed over it. After the last of these, the ESR was in the polar cap and the radar observations were characterised by extremely low ionospheric densities and downward field-aligned flows. The IMF then turned northward again and the auroral oval contracted such that the ESR moved back into the cusp/cleft region. For the poleward-retreating northward-IMF cusp/cleft, the convection flows were slower, upflows were weaker, and the electron density and temperature enhancements were less structured.
Following the northward turning, the bands of high electron temperature and cusp/cleft aurora bifurcated, consistent with both subsolar and lobe reconnection taking place simultaneously. The present paper describes the large-scale behaviour of the ionosphere during this interval, as observed by a powerful combination of instruments. Two companion papers, by Lockwood et al. (2000) and Thorolfsson et al. (2000), both in this issue, describe the detailed behaviour of the poleward-moving transients observed during the interval of southward B-z, and explain their morphology in the context of previous theoretical work.
Abstract:
We report high-resolution observations of the southward-IMF cusp/cleft ionosphere made on December 16th 1998 by the EISCAT (European incoherent scatter) Svalbard radar (ESR), and compare them with observations of dayside auroral luminosity, as seen at a wavelength of 630 nm by a meridian scanning photometer at Ny Alesund, and of plasma flows, as seen by the CUTLASS (co-operative UK twin location auroral sounding system) Finland HF radar. The optical data reveal a series of poleward-moving transient red-line (630 nm) enhancements, events that have been associated with bursts in the rate of magnetopause reconnection generating new open flux. The combined observations at this time have strong similarities to predictions of the effects of soft electron precipitation modulated by pulsed reconnection, as made by Davis and Lockwood (1996); however, the effects of rapid zonal flow in the ionosphere, caused by the magnetic curvature force on the newly opened field lines, are found to be a significant additional factor. In particular, it is shown how enhanced plasma loss rates induced by the rapid convection can explain two outstanding anomalies of the 630 nm transients, namely how minima in luminosity form between the poleward-moving events and how events can re-brighten as they move poleward. The observations show how cusp/cleft aurora and transient poleward-moving auroral forms appear in the ESR data and the conditions which cause enhanced 630 nm emission in the transients: they are an important first step in enabling the ESR to identify these features away from the winter solstice when supporting auroral observations are not available.
Abstract:
The solar wind is an extended ionized gas of very high electrical conductivity, and therefore drags some magnetic flux out of the Sun to fill the heliosphere with a weak interplanetary magnetic field(1,2). Magnetic reconnection, the merging of oppositely directed magnetic fields, between the interplanetary field and the Earth's magnetic field allows energy from the solar wind to enter the near-Earth environment. The Sun's properties, such as its luminosity, are related to its magnetic field, although the connections are still not well understood(3,4). Moreover, changes in the heliospheric magnetic field have been linked with changes in total cloud cover over the Earth, which may influence global climate(5). Here we show that measurements of the near-Earth interplanetary magnetic field reveal that the total magnetic flux leaving the Sun has risen by a factor of 1.4 since 1964; surrogate measurements of the interplanetary magnetic field indicate that the increase since 1901 has been by a factor of 2.3. This increase may be related to chaotic changes in the dynamo that generates the solar magnetic field. We do not yet know quantitatively how such changes will influence the global environment.
Abstract:
Ground-based observations of dayside auroral forms and magnetic perturbations in the arctic sectors of Svalbard and Greenland, in combination with high-resolution measurements of ionospheric ion drift and temperature by the EISCAT radar, are used to study temporal/spatial structures of cusp-type auroral forms in relation to convection. Large-scale patterns of equivalent convection in the dayside polar ionosphere are derived from the magnetic observations in Greenland and Svalbard. This information is used to estimate the ionospheric convection pattern in the vicinity of the cusp/cleft aurora. The reported observations, covering the period 0700-1130 UT on January 11, 1993, are separated into four intervals according to the observed characteristics of the aurora and ionospheric convection. The morphology and intensity of the aurora are very different in quiet and disturbed intervals. A latitudinally narrow zone of intense and dynamic 630.0 nm emission equatorward of 75 degrees MLAT was observed during periods of enhanced antisunward convection in the cusp region. This (type 1 cusp aurora) is considered to be the signature of plasma entry via magnetopause reconnection at low magnetopause latitudes, i.e. the low-latitude boundary layer (LLBL). Another zone of weak 630.0 nm emission (type 2 cusp aurora) was observed to extend up to high latitudes (similar to 79 degrees MLAT) during relatively quiet magnetic conditions, when indications of reverse (sunward) convection were observed in the dayside polar cap. This is postulated to be a signature of merging between a northward-directed IMF (B-z > 0) and the geomagnetic field poleward of the cusp. The coexistence of type 1 and 2 auroras was observed under intermediate circumstances. The optical observations from Svalbard and Greenland were also used to determine the temporal and spatial evolution of type 1 auroral forms, i.e.
poleward-moving auroral events occurring in the vicinity of a rotational convection reversal in the early post-noon sector. Each event appeared as a local brightening at the equatorward boundary of the pre-existing type 1 cusp aurora, followed by poleward and eastward expansions of luminosity. The auroral events were associated with poleward-moving surges of enhanced ionospheric convection and F-layer ion temperature as observed by the EISCAT radar in Tromso. The EISCAT ion flow data in combination with the auroral observations show strong evidence for plasma flow across the open/closed field line boundary.