998 results for OBSERVATIONS COSMOLOGICAL INTERPRETATION


Relevance:

30.00%

Publisher:

Abstract:

In the subject of fingerprints, the rise of computer tools has made it possible to create powerful automated search algorithms. These algorithms allow, inter alia, a fingermark to be compared to a fingerprint database, and therefore a link to be established between the mark and a known source. With the growing capacity of these systems and of data storage, as well as increasing international collaboration between police services, the size of these databases increases. The current challenge for the field of fingerprint identification lies in the growth of these databases, which makes it possible to find impressions that are very similar yet come from distinct fingers. At the same time, however, these data and systems allow a description of the variability between different impressions from the same finger and between impressions from different fingers. This statistical description of the within- and between-finger variabilities, computed on the basis of minutiae and their relative positions, can then be utilized in a statistical approach to interpretation. The computation of a likelihood ratio, employing simultaneously the comparison between the mark and the print of the case, the within-variability of the suspect's finger, and the between-variability of the mark with respect to a database, can then be based on representative data. These data thus allow an evaluation which may be more detailed than that obtained by the application of rules established long before the advent of these large databases, or by the specialist's experience alone. The goal of the present thesis is to evaluate likelihood ratios computed from the scores of an automated fingerprint identification system (AFIS) when the source of the tested and compared marks is known. These ratios must support the hypothesis that is known to be true. Moreover, they should support this hypothesis more and more strongly as information is added in the form of additional minutiae. For the modeling of within- and between-variability, the necessary data were defined and acquired for one finger of a first donor and two fingers of a second donor. The database used for between-variability includes approximately 600,000 inked prints. The minimal number of observations necessary for a robust estimation was determined for the two distributions used. Factors which influence these distributions were also analyzed: the number of minutiae included in the configuration and the configuration as such for both distributions, as well as the finger number and the general pattern for between-variability, and the orientation of the minutiae for within-variability. In the present study, the only factor for which no influence has been shown is the orientation of minutiae. The results show that likelihood ratios derived from the scores of an AFIS can be used for evaluation. Relatively low rates of likelihood ratios supporting the hypothesis known to be false were obtained. The maximum rate of likelihood ratios supporting the hypothesis that two impressions were left by the same finger when they in fact came from different fingers is 5.2%, for a configuration of 6 minutiae. When a 7th and then an 8th minutia are added, this rate drops to 3.2% and then to 0.8%. In parallel, for these same configurations, the likelihood ratios obtained when the two impressions do come from the same finger are on average of the order of 100, 1,000, and 10,000 for 6, 7, and 8 minutiae.
These likelihood ratios can therefore be an important aid for decision making. Both positive effects linked to the addition of minutiae (a drop in the rates of likelihood ratios that can lead to an erroneous decision, and an increase in the value of the likelihood ratio) were observed systematically within the framework of the study. Approximations based on 3 scores for within-variability and on 10 scores for between-variability were found and showed satisfactory results.
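A minimal sketch of the score-based likelihood ratio described above: the within-finger and between-finger score distributions are modeled (here with kernel density estimates on synthetic scores, an illustrative assumption) and the LR is their ratio at the score of the case comparison. The score scale and values are invented, not the thesis's data or distributional choices.

```python
# Minimal sketch of a score-based likelihood ratio. The score values and the
# kernel-density modeling below are illustrative assumptions, not the thesis's
# actual data or distributional choices.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical AFIS comparison scores (arbitrary units; higher = more similar).
within_scores = rng.normal(120.0, 10.0, size=200)    # same-finger comparisons
between_scores = rng.normal(40.0, 15.0, size=5000)   # different-finger comparisons

f_within = gaussian_kde(within_scores)    # numerator: within-finger variability
f_between = gaussian_kde(between_scores)  # denominator: between-finger variability

def likelihood_ratio(score: float) -> float:
    """LR = p(score | same finger) / p(score | different fingers)."""
    return f_within(score)[0] / f_between(score)[0]

print(likelihood_ratio(110.0))  # high score: LR >> 1, supports same-finger
print(likelihood_ratio(45.0))   # low score: LR << 1, supports different fingers
```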

Relevance:

30.00%

Publisher:

Abstract:

Forensic science casework involves making a series of choices. The difficulty in making these choices lies in the inevitable presence of uncertainty, the unique context of circumstances surrounding each decision and, in some cases, the complexity due to numerous, interrelated random variables. Given that these decisions can lead to serious consequences in the administration of justice, forensic decision making should be supported by a robust framework that makes inferences under uncertainty and decisions based on these inferences. The objective of this thesis is to respond to this need by presenting a framework for making rational choices in decision problems encountered by scientists in forensic science laboratories. Bayesian inference and decision theory meet the requirements for such a framework. To attain its objective, this thesis consists of three propositions, advocating the use of (1) decision theory, (2) Bayesian networks, and (3) influence diagrams for handling forensic inference and decision problems. The results present a uniform and coherent framework for making inferences and decisions in forensic science using the above theoretical concepts. They describe how to organize each type of problem by breaking it down into its different elements, and how to find the most rational course of action by distinguishing between one-stage and two-stage decision problems and applying the principle of expected utility maximization. To illustrate the framework's application to the problems encountered by scientists in forensic science laboratories, theoretical case studies apply decision theory, Bayesian networks and influence diagrams to a selection of different types of inference and decision problems dealing with different categories of trace evidence. Two studies of the two-trace problem illustrate how the construction of Bayesian networks can handle complex inference problems, and thus overcome the hurdle of complexity that can be present in decision problems. Three studies, one on what to conclude when a database search provides exactly one hit, one on what genotype to search for in a database based on the observations made on DNA typing results, and one on whether to submit a fingermark to the process of comparing it with prints of its potential sources, explain the application of decision theory and influence diagrams to each of these decisions. The results of the theoretical case studies support the thesis's three propositions. Hence, this thesis presents a uniform framework for organizing and finding the most rational course of action in decision problems encountered by scientists in forensic science laboratories. The proposed framework is an interactive and exploratory tool for better understanding a decision problem so that this understanding may lead to better informed choices.
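The principle of expected utility maximization invoked above can be illustrated with a minimal one-stage sketch; the actions, states, probabilities, and utilities below are hypothetical illustrations, not values from the thesis.

```python
# Minimal one-stage decision sketch using expected utility maximization. The
# actions, states, probabilities and utilities are hypothetical illustrations,
# not values from the thesis.

actions = ["submit the fingermark for comparison", "do not submit"]
states = ["mark from the person of interest", "mark from another source"]

# Assumed posterior probabilities of the states given the case information.
p = {"mark from the person of interest": 0.7, "mark from another source": 0.3}

# Assumed utilities of each (action, state) outcome on a 0-1 scale.
u = {
    ("submit the fingermark for comparison", "mark from the person of interest"): 1.0,
    ("submit the fingermark for comparison", "mark from another source"): 0.4,
    ("do not submit", "mark from the person of interest"): 0.2,
    ("do not submit", "mark from another source"): 0.8,
}

def expected_utility(action):
    """EU(a) = sum over states s of P(s) * U(a, s)."""
    return sum(p[s] * u[(action, s)] for s in states)

for a in actions:
    print(f"{a}: EU = {expected_utility(a):.2f}")
print("most rational course of action:", max(actions, key=expected_utility))
```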

Relevance:

30.00%

Publisher:

Abstract:

The value of forensic results crucially depends on the propositions and the information under which they are evaluated. For example, if a full single DNA profile for a contemporary marker system matching the profile of Mr A is assessed, given the propositions that the DNA came from Mr A and that it came from an unknown person, the strength of evidence can be overwhelming (e.g., in the order of a billion). In contrast, if we assess the same result given that the DNA came from Mr A and given that it came from his twin brother (i.e., a person with the same DNA profile), the strength of evidence will be 1, and therefore neutral, unhelpful and irrelevant to the case at hand. While this understanding is probably uncontroversial and obvious to most, if not all, practitioners dealing with DNA evidence, the practical precept of not specifying an alternative source with the same characteristics as the one considered under the first proposition may be much less clear in other circumstances. During discussions with colleagues and trainees, cases have come to our attention where forensic scientists have difficulty with the formulation of propositions. It is particularly common to observe that results (e.g., observations) are included in the propositions, whereas, as argued throughout this note, they should not be. A typical example could be a case where a shoe-mark with a logo and the general pattern characteristics of a Nike Air Jordan shoe is found at the scene of a crime. A Nike Air Jordan shoe is then seized at Mr A's house and control prints of this shoe are compared to the mark. The results (e.g., a trace with this general pattern and acquired characteristics corresponding to the sole of Mr A's shoe) are then evaluated given the propositions 'The mark was left by Mr A's Nike Air Jordan shoe-sole' and 'The mark was left by an unknown Nike Air Jordan shoe'. As a consequence, the footwear examiner will not evaluate part of the observations (i.e., that the mark presents the general pattern of a Nike Air Jordan), whereas these can be highly informative. Such examples can be found in all forensic disciplines. In this article, we present a few such examples and discuss aspects that will help forensic scientists with the formulation of propositions. In particular, we emphasise the usefulness of notation to distinguish results that forensic scientists should evaluate from case information that the Court will evaluate.
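The distinction the note draws can be written in the standard likelihood ratio notation of evaluative reporting (a generic formulation, not quoted from the article), with E the results the scientist evaluates, I the framework of case information, and H1, H2 the source-level propositions:

```latex
\[
\mathrm{LR} = \frac{\Pr(E \mid H_1, I)}{\Pr(E \mid H_2, I)}
\]
% H1: the mark was left by the sole of Mr A's shoe.
% H2: the mark was left by the sole of an unknown shoe.
% The observed general pattern and acquired characteristics belong in E,
% to be evaluated under both propositions, not inside H1 and H2.
```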

Relevance:

30.00%

Publisher:

Abstract:

Preeclampsia is the main cause of maternal mortality and is associated with a five-fold increase in perinatal mortality in developing countries. In spite of this, the etiology of preeclampsia remains unknown. The present article analyzes the contradictory results of calcium supplementation trials for the prevention of preeclampsia and attempts to explain them. A proposed integrative model for the clinical manifestations of preeclampsia is discussed. In this proposal we suggest that preeclampsia is caused by nutritional, environmental and genetic factors that lead to an imbalance between the free radicals nitric oxide and superoxide, and thus to peroxynitrite formation, in the vascular endothelium. Properly interpreted, this model implies that the best way of preventing preeclampsia is an adequate prenatal control system involving antioxidant vitamin and mineral supplementation, together with the diagnosis and early treatment of asymptomatic urinary and vaginal infections. The role of infection in the genesis of preeclampsia needs to be studied in depth, because it may involve a fundamental change in the prevention and treatment of the disorder.

Relevance:

30.00%

Publisher:

Abstract:

The centromere is the chromosomal region where the kinetochore assembles during mitosis. Unlike certain genetic elements, the centromeric sequence is neither conserved between species nor sufficient for centromeric function. It is therefore well accepted in the literature that the centromere is regulated epigenetically by a histone H3 variant, CENP-A. KNL-2, also known as M18BP1, together with its partners Mis18α and Mis18β, are proteins essential for the incorporation of newly synthesized CENP-A at centromeres. Experimental evidence shows that KNL-2, which carries a DNA-binding domain called Myb, is the most upstream protein in the incorporation of CENP-A at centromeres in G1 phase. However, its function in the CENP-A incorporation process is not well understood, and not all of its binding partners are known. New binding partners of KNL-2 were identified by immunoprecipitation experiments followed by mass spectrometry analysis. A role in the incorporation of newly synthesized CENP-A at centromeres was attributed to MgcRacGAP, one of the 60 proteins identified in the assay. MgcRacGAP, together with the protein ECT-2 (a GEF) and the small GTPase Cdc42, was shown to be required for the stability of CENP-A incorporated at centromeres. These observations led to the identification of a third molecular step in the incorporation of newly synthesized CENP-A in G1 phase: the stabilization of newly incorporated CENP-A at centromeres. This step is important for maintaining centromeric identity through each cell division. To characterize the function of KNL-2 in the incorporation of newly synthesized CENP-A at centromeres, high-resolution microscopy coupled with image quantification was used. The results show that the recruitment of KNL-2 to the centromere is rapid, occurring about 5 minutes after mitotic exit. In addition, the structure of the Myb domain of KNL-2 from the nematode C. elegans was solved by NMR and displays a helix-turn-helix motif, a structure known from the DNA-binding domains of the Myb family. Moreover, the human (HsMyb) and C. elegans (CeMyb) Myb domains bind DNA in vitro, but no sequence is specifically recognized by these domains. However, it was possible to show that both domains preferentially bind CENP-A-YFP chromatin over H2B-GFP chromatin in a modified SIMPull assay under a TIRF microscope. The Myb domain of KNL-2 is therefore sufficient to specifically recognize centromeric chromatin. Finally, the element recognized by the Myb domains in vitro was potentially identified. Indeed, the HsMyb and CeMyb domains were shown to bind single-stranded DNA in vitro. Furthermore, the HsMyb and CeMyb domains do not colocalize with CENP-A when expressed in HeLa cells, but rather with PML nuclear bodies, nuclear structures composed of RNA. Thus, by potentially binding centromeric transcripts, the Myb domains of KNL-2 could specify the incorporation of newly synthesized CENP-A exclusively at centromeric regions.

Relevance:

30.00%

Publisher:

Abstract:

The thesis begins with a review of basic elements of the general theory of relativity (GTR), which forms the basis for the theoretical interpretation of observations in cosmology. The first chapter also discusses the standard model of cosmology, namely the Friedmann model, its predictions and its problems, and includes a brief discussion of fractals and inflation in the early universe. The second chapter formulates a new, stochastic approach to cosmology. In this model, the dynamics of the early universe is described by a set of non-deterministic, Langevin-type equations, and we derive the solutions using the Fokker-Planck formalism. Here we demonstrate how the problems with the standard model can be eliminated by introducing the idea of stochastic fluctuations in the early universe. Many recent observations indicate that the present universe may be approximated by a many-component fluid, and we assume that only the total energy density is conserved. This, in turn, leads to energy transfer between different components of the cosmic fluid, and fluctuations in such energy transfer can certainly induce fluctuations in the mean value of the factor w in the equation of state p = wρ, resulting in a fluctuating expansion rate for the universe. The third chapter discusses the stochastic evolution of the cosmological parameters in the early universe using the new approach. The penultimate chapter concerns refinements to be made to the present model by means of a new deterministic model. The concluding chapter presents a discussion of other problems with conventional cosmology, such as the fractal correlation of the galactic distribution, for which the author attempts an explanation using the stochastic approach.
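A schematic of the kind of stochastic evolution described here can be written as an Euler-Maruyama integration of the Friedmann and fluid equations with a fluctuating equation-of-state factor w; the specific equations, units, and parameter values below are illustrative assumptions, not the thesis's actual Langevin system.

```python
# Schematic Euler-Maruyama integration of a Friedmann-type model in which the
# equation-of-state factor w fluctuates stochastically. Equations and parameter
# values are illustrative, not the thesis's actual Langevin system.
import numpy as np

rng = np.random.default_rng(42)

dt = 1e-4               # time step (arbitrary units with 8*pi*G = c = 1)
n_steps = 50_000
w0, sigma = 1/3, 0.05   # mean radiation-like w; assumed fluctuation strength

rho, a = 1.0, 1.0       # initial energy density and scale factor
for _ in range(n_steps):
    # White-noise fluctuation in w with Euler-Maruyama scaling.
    w = w0 + sigma * rng.standard_normal() / np.sqrt(dt)
    H = np.sqrt(rho / 3.0)                   # Friedmann equation: H^2 = rho/3
    rho += -3.0 * H * (1.0 + w) * rho * dt   # fluid (continuity) equation, p = w*rho
    a += H * a * dt                          # fluctuating expansion rate

print(f"final scale factor a = {a:.4f}, density rho = {rho:.6f}")
```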

Relevance:

30.00%

Publisher:

Abstract:

The problem of modeling solar energetic particle (SEP) events is important to both space weather research and forecasting, yet it has seen relatively little progress. Most important SEP events are associated with coronal mass ejections (CMEs) that drive coronal and interplanetary shocks. These shocks can continuously produce accelerated particles from the ambient medium to well beyond 1 AU. This paper describes an effort to model real SEP events using a Center for Integrated Space Weather Modeling (CISM) MHD solar wind simulation, including a cone model of CMEs to initiate the related shocks. In addition to providing observation-inspired shock geometry and characteristics, this MHD simulation describes the time-dependent connection of the observer's field line to the shock source. As a first approximation, we assume a shock-jump-parameterized source strength and spectrum, and that scatter-free transport occurs outside of the shock source, thus emphasizing the role the shock evolution plays in determining the modeled SEP event profile. Three halo CME events, on May 12, 1997, November 4, 1997, and December 13, 2006, are used to test the modeling approach. While challenges arise in the identification and characterization of the shocks in the MHD model results, this approach illustrates the importance to SEP event modeling of globally simulating the underlying heliospheric event. The results also suggest the potential utility of such a model for forecasting and for the interpretation of separated multipoint measurements such as those expected from the STEREO mission.
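To make the scatter-free transport assumption concrete, here is a rough schematic: particles released where the observer's field line meets the shock arrive after a delay set by the path length and particle speed. The constant shock speed, the exponential source parameterization, and the radial path-length approximation are all invented simplifications, not the CISM model.

```python
# Rough schematic of scatter-free SEP transport from an evolving shock source.
# Shock trajectory, source parameterization and numbers are invented.
import numpy as np

AU = 1.496e8          # km
v_particle = 1.0e5    # km/s, assumed fast proton speed
v_shock = 1500.0      # km/s, assumed constant shock speed

t_grid = np.arange(0.0, 40.0, 0.5) * 3600.0   # release times (s) after launch
shock_r = v_shock * t_grid                    # heliocentric shock distance (km)

# Assumed source strength, falling off as the shock weakens with distance.
source = np.exp(-shock_r / AU)

# Scatter-free arrival at 1 AU: release time plus travel time along a path
# approximated here (crudely) by the remaining radial distance.
arrival = t_grid + np.abs(AU - shock_r) / v_particle

order = np.argsort(arrival)
for t_a, s in zip(arrival[order][:5] / 3600.0, source[order][:5]):
    print(f"arrival {t_a:6.2f} h  relative intensity {s:.3f}")
```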

Relevance:

30.00%

Publisher:

Abstract:

Magnetic clouds (MCs) are a subset of interplanetary coronal mass ejections (ICMEs) which exhibit signatures consistent with a magnetic flux rope structure. Techniques for reconstructing flux rope orientation from single-point in situ observations typically assume the flux rope is locally cylindrical, e.g., minimum variance analysis (MVA) and force-free flux rope (FFFR) fitting. In this study, we outline a non-cylindrical magnetic flux rope model, in which the flux rope radius and axial curvature can both vary along the length of the axis. This model is not necessarily intended to represent the global structure of MCs, but it can be used to quantify the error in MC reconstruction resulting from the cylindrical approximation. When the local flux rope axis is approximately perpendicular to the heliocentric radial direction, which is also the effective spacecraft trajectory through a magnetic cloud, the error in using cylindrical reconstruction methods is relatively small (≈10°). However, as the local axis orientation becomes increasingly aligned with the radial direction, the spacecraft trajectory may pass close to the axis at two separate locations. This results in a magnetic field time series which deviates significantly from encounters with a force-free flux rope, and consequently the error in the axis orientation derived from cylindrical reconstructions can be as much as 90°. Such two-axis encounters can result in an apparent ‘double flux rope’ signature in the magnetic field time series, sometimes observed in spacecraft data. Analysing each axis encounter independently produces reasonably accurate axis orientations with MVA, but larger errors with FFFR fitting.
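As context for the reconstruction techniques compared above, here is a minimal sketch of MVA applied to a magnetic field time series; the synthetic flux-rope-like signal and its parameters are invented for illustration.

```python
# Minimal sketch of minimum variance analysis (MVA) on a magnetic field time
# series; the synthetic flux-rope-like signal below is an invented test case.
import numpy as np

def mva(B: np.ndarray):
    """B is an (N, 3) array of field vectors. Returns the eigenvalues
    (ascending) and eigenvectors (columns) of the magnetic variance matrix.
    For a cylindrical flux rope, the intermediate-variance direction
    estimates the rope axis."""
    M = np.cov(B, rowvar=False)           # 3x3 magnetic variance matrix
    return np.linalg.eigh(M)              # eigh: symmetric, sorted ascending

# Synthetic crossing: the field rotates in x while the axial (y) component
# rises and falls, and the normal (z) component is only noise.
t = np.linspace(-1.0, 1.0, 200)
B0 = 10.0  # nT
B = np.column_stack([
    B0 * t,                                                   # maximum variance
    B0 * (1.0 - t**2),                                        # intermediate: axis
    0.2 * np.random.default_rng(1).standard_normal(t.size),   # minimum: normal
])

eigvals, eigvecs = mva(B)
axis_estimate = eigvecs[:, 1]  # intermediate-variance eigenvector
print("estimated axis direction:", np.round(axis_estimate, 3))  # ~ (0, ±1, 0)
```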

Relevance:

30.00%

Publisher:

Abstract:

Observations from the EISCAT VHF incoherent scatter radar system in northern Norway, during a run of the common programme CP-4, reveal a series of poleward-propagating F-region electron density enhancements in the pre-noon sector on 23 November 1999. These plasma density features, which are observed under conditions of a strongly southward interplanetary magnetic field, exhibit a recurrence rate of under 10 min and appear to emanate from the vicinity of the open/closed field-line boundary, from where they travel into the polar cap; this is suggestive of their being an ionospheric response to transient reconnection at the dayside magnetopause (flux transfer events). Simultaneous with the density structures detected by the VHF radar, poleward-moving radar auroral forms (PMRAFs) are observed by the Finland HF coherent scatter radar. It is thought that PMRAFs, which are commonly observed near local noon by HF radars, are also related to flux transfer events, although the specific mechanism for the generation of the field-aligned irregularities within such features is not well understood. The HF observations suggest that, for much of their existence, the PMRAFs trace fossil signatures of transient reconnection rather than revealing the footprint of active reconnection itself; this is evidenced not least by the fact that the PMRAFs become narrower in spectral width as they evolve away from the region of more classical, broad cusp scatter in which they originate. Interpretation of the HF observations with reference to the plasma parameters diagnosed by the incoherent scatter radar suggests that, as the PMRAFs migrate away from the reconnection site and across the polar cap, entrained in the ambient antisunward flow, the irregularities therein are generated by the presence of gradients in the electron density, with these gradients having been formed through structuring of the ionosphere in the cusp region in response to transient reconnection.

Relevance:

30.00%

Publisher:

Abstract:

Optical observations of a dayside auroral brightening sequence, by means of all-sky TV cameras and meridian scanning photometers, have been combined with EISCAT ion drift observations within the same invariant latitude-MLT sector. The observations were made during a January 1989 campaign by utilizing the high F region ion densities during the maximum phase of the solar cycle. The characteristic intermittent optical events, covering ∼300 km in east-west extent, move eastward (antisunward) along the poleward boundary of the persistent background aurora at velocities of ∼1.5 km s⁻¹ and are associated with ion flows which swing from eastward to westward, with a subsequent return to eastward, during the interval of a few minutes when there is enhanced auroral emission within the radar field of view. The breakup of discrete auroral forms occurs at the reversal (negative potential) that forms between eastward plasma flow, maximizing near the persistent arc poleward boundary, and strong transient westward flow to the south. The reported events, covering a 35 min interval around 1400 MLT, are embedded within a longer period of similar auroral activity between 0830 (1200 MLT) and 1300 UT (1600 MLT). These observations are discussed in relation to recent models of boundary layer plasma dynamics and the associated magnetosphere-ionosphere coupling. The ionospheric events may correspond to large-scale wavelike motions of the low-latitude boundary layer (LLBL)/plasma sheet (PS) boundary. On the basis of this interpretation, the observed spot size, speed and repetition period (∼10 min) give a wavelength (the distance between spots) of ∼900 km in the present case. The events can also be explained as ionospheric signatures of newly opened flux tubes associated with reconnection bursts at the magnetopause near 1400 MLT. We also discuss these data in relation to random, patchy reconnection (as has recently been invoked to explain the presence of the sheathlike plasma on closed field lines in the LLBL). In view of the lack of IMF data, and the existing uncertainty in the location of the open-closed field line boundary relative to the optical events, an unambiguous discrimination between the different alternatives is not easily obtained.
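For reference, the quoted wavelength follows directly from the reported propagation speed and repetition period:

```latex
\[
\lambda = v\,T \approx 1.5\ \mathrm{km\,s^{-1}} \times 600\ \mathrm{s} \approx 900\ \mathrm{km}
\]
```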

Relevance:

30.00%

Publisher:

Abstract:

Conjunctive measurements made by the Dynamics Explorer 1 and 2 spacecraft on October 22, 1981, under conditions of southward IMF, suggest the existence of a cusp ion injection from a region at the magnetopause with a scale size of ∼1/2 to 1 R_E. Current signatures observed by the LAPI and MAGB instruments on board DE-2 indicate the existence of a rotation in the magnetic field that is consistent with a filamentary current system. The observed current structure can be interpreted as the ionospheric signature of a flux transfer event (FTE). In addition to this large-scale current structure there exist three small-scale filamentary current pairs. These current pairs close locally and thus, if our interpretation of this event as an FTE is correct, represent the first reported observations of FTE interior structure at low altitudes.

Relevance:

30.00%

Publisher:

Abstract:

Calculations using a numerical model of the convection dominated high latitude ionosphere are compared with observations made by EISCAT as part of the UK-POLAR Special Programme. The data used were for 24–25 October 1984, a period characterized by an unusually steady IMF, with Bz < 0 and By > 0; in the calculations it was assumed that a steady IMF implies steady convection conditions. Using the electric field models of Heppner and Maynard (1983) appropriate to By > 0 and precipitation data taken from Spiro et al. (1982), we calculated the velocities and electron densities appropriate to the EISCAT observations. Many of the general features of the velocity data were reproduced by the model. In particular, the phasing of the change from eastward to westward flow in the vicinity of the Harang discontinuity, flows near the dayside throat and a region of slow flow at higher latitudes near dusk were well reproduced. In the afternoon sector, modelled velocity values were significantly less than those observed. Electron density calculations showed good agreement with EISCAT observations near the F-peak, but compared poorly with observations near 211 km. In both cases, the greatest disagreement occurred in the early part of the observations, where the convection pattern was poorly known and showed some evidence of long-term temporal change. Possible causes for the disagreement between observations and calculations are discussed and shown to raise interesting and, as yet, unresolved questions concerning the interpretation of the data. For the data set used, the late afternoon dip in electron density observed near the F-peak and interpreted as the signature of the mid-latitude trough is well reproduced by the calculations. Calculations indicate that it does not arise from long residence times of plasma on the nightside, but is the signature of a gap between two major ionization sources, viz. photoionization and particle precipitation.

Relevance:

30.00%

Publisher:

Abstract:

Field observations of new particle formation and the subsequent particle growth are typically only possible at a fixed measurement location, and hence do not follow the temporal evolution of an air parcel in a Lagrangian sense. Standard analysis for determining formation and growth rates requires that the time-dependent formation rate and growth rate of the particles are spatially invariant; air parcel advection means that the observed temporal evolution of the particle size distribution at a fixed measurement location may not represent the true evolution if there are spatial variations in the formation and growth rates. Here we present a zero-dimensional aerosol box model coupled with one-dimensional atmospheric flow to describe the impact of advection on the evolution of simulated new particle formation events. Wind speed, particle formation rates and growth rates are input parameters that can vary as a function of time and location, using wind speed to connect location to time. The output simulates measurements at a fixed location; formation and growth rates of the particle mode can then be calculated from the simulated observations at a stationary point for different scenarios and be compared with the ‘true’ input parameters. Hence, we can investigate how spatial variations in the formation and growth rates of new particles would appear in observations of particle number size distributions at a fixed measurement site. We show that the particle size distribution and growth rate at a fixed location is dependent on the formation and growth parameters upwind, even if local conditions do not vary. We also show that different input parameters used may result in very similar simulated measurements. Erroneous interpretation of observations in terms of particle formation and growth rates, and the time span and areal extent of new particle formation, is possible if the spatial effects are not accounted for.
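A minimal sketch of the idea, under invented parameter profiles: a zero-dimensional parcel is advected at constant wind speed through spatially varying formation and growth rates, so what a fixed site records is the parcel state on arrival, set by upwind conditions. The functions `formation_rate` and `growth_rate` and all numbers below are assumptions, not the paper's inputs.

```python
# Minimal sketch of a zero-dimensional box model advected past a fixed site;
# parameter profiles and values are invented, not the paper's inputs.
import numpy as np

wind = 5.0         # m/s, assumed constant wind speed
x_site = 50_000.0  # m, fixed measurement location downwind
dt = 60.0          # s

def formation_rate(x):  # particles cm^-3 s^-1, assumed spatial profile
    return 0.5 * np.exp(-((x - 10_000.0) / 8_000.0) ** 2)

def growth_rate(x):     # nm/h, assumed spatial profile
    return 2.0 + 3.0 * np.exp(-((x - 30_000.0) / 10_000.0) ** 2)

# Lagrangian parcel starting at x = 0: track particle number and mode diameter.
x, n, d = 0.0, 0.0, 1.5  # m, cm^-3, nm (nucleation-mode start size)
record = []
while x <= x_site:
    n += formation_rate(x) * dt          # new particle formation upwind
    d += growth_rate(x) * dt / 3600.0    # condensational growth en route
    x += wind * dt
    record.append((x / wind, n, d))      # time stamp when the parcel is at x

# The last entry is what a fixed station at x_site would observe on arrival.
t, n_obs, d_obs = record[-1]
print(f"at the site after {t/3600:.1f} h: N = {n_obs:.0f} cm^-3, d = {d_obs:.1f} nm")
```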

Relevance:

30.00%

Publisher:

Abstract:

The goals of this project are manifold. First, I will attempt to discover evidence in the book of Joshua that will lend support to the theory of a Josianic influence enacted in the 7th century BCE. I will do this through an analysis of the rhetoric in selected stories in Joshua using the ideas of Foucault. Second, I will address the significance of this kind of analysis as having potential for the emancipation of oppressed peoples. The first section delineates scholarly discussion on the literary and historical context of the book of Joshua. These scholarly works are foundational to this study because they situate the text within a particular period in history and within certain ideologies. Chapter 2 discusses the work of Foucault and how his ideas will be applied to particular texts of the book of Joshua. The focused analysis of these texts occurs within chapters 3 to 6. Chapter 7 presents an integration of the observations made through the analyses performed in the previous chapters and expands on the ethical significance of this study.

Relevance:

30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)