Abstract:
We extend the method of Cassels for computing the Cassels-Tate pairing on the 2-Selmer group of an elliptic curve, to the case of 3-Selmer groups. This requires significant modifications to both the local and global parts of the calculation. Our method is practical in sufficiently small examples, and can be used to improve the upper bound for the rank of an elliptic curve obtained by 3-descent.
Abstract:
This paper describes new advances in the exploitation of oxygen A-band measurements from the POLDER3 sensor onboard PARASOL, a satellite platform within the A-Train. These developments result not only from accounting for the dependence of POLDER oxygen parameters on cloud optical thickness τ and on the scene's geometrical conditions but also, and more importantly, from a finer understanding of the sensitivity of these parameters to cloud vertical extent. This sensitivity is accessible thanks to the multidirectional character of POLDER measurements. In the case of monolayer clouds, which represent most cloudy conditions, new oxygen parameters are obtained and calibrated from POLDER3 data collocated with the measurements of the two active sensors of the A-Train: CALIOP/CALIPSO and CPR/CloudSat. From a parameterization that depends on (μs, τ), with μs the cosine of the solar zenith angle, a cloud top oxygen pressure (CTOP) and a cloud middle oxygen pressure (CMOP) are obtained, which are estimates of the actual cloud top and middle pressures (CTP and CMP). The performance of CTOP and CMOP is presented by cloud class following the ISCCP classification. In 2008, the correlation coefficient between CMOP and CMP is 0.81 for cirrostratus, 0.79 for stratocumulus, and 0.75 for deep convective clouds. The correlation coefficient between CTOP and CTP is 0.75, 0.73, and 0.79 for the same cloud types. The score obtained by CTOP, defined as the confidence in the retrieval for a particular range of inferred values and for a given error, is higher than that of the MODIS CTP estimate. CTOP scores are highest for the most populated CTP bins. For liquid (ice) clouds and an error of 30 hPa (50 hPa), the score of CTOP reaches 50% (70%). From the difference between CTOP and CMOP, a first estimate of the cloud vertical extent h is possible.
A second estimate of h comes from the correlation between the angular standard deviation of the POLDER oxygen pressure, σPO2, and the cloud vertical extent. This correlation is studied in detail for liquid clouds. It is shown to be spatially and temporally robust, except for clouds above land during winter months. The analysis of the correlation's dependence on the scene's characteristics leads to a parameterization providing h from σPO2. For liquid water clouds above ocean in 2008, the mean difference between the actual cloud vertical extent and the one retrieved from σPO2 (from the pressure difference) is 5 m (−12 m). The standard deviation of the mean difference is close to 1000 m for the two methods. POLDER estimates of the cloud geometrical thickness achieve a global score of 50% confidence for a relative error of 20% (40%) of the estimate for ice (liquid) clouds over ocean. These results still need to be validated outside of the CALIPSO/CloudSat track.
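The first estimate of h, taken from the difference between CTOP and CMOP, can be sketched numerically. The sketch below is illustrative only: it assumes an isothermal hypsometric relation with a fixed scale height and that CMOP lies halfway through the cloud vertically; neither choice is claimed to match the paper's actual parameterization, and the function name is hypothetical.

```python
import math

def cloud_extent_from_pressures(ctop_hpa, cmop_hpa, scale_height_m=8000.0):
    """Rough geometric cloud extent from cloud-top (CTOP) and cloud-middle
    (CMOP) oxygen pressures, via the isothermal hypsometric relation
    z = H * ln(p0 / p). Assumes the cloud-middle level lies halfway through
    the cloud, so the full extent is twice the top-to-middle thickness."""
    if cmop_hpa <= ctop_hpa:
        raise ValueError("cloud-middle pressure must exceed cloud-top pressure")
    # Height difference between the cloud-top and cloud-middle levels.
    dz = scale_height_m * math.log(cmop_hpa / ctop_hpa)
    return 2.0 * dz

# e.g. CTOP = 700 hPa and CMOP = 800 hPa gives an extent of roughly 2.1 km
```

With a different assumed scale height (temperature profile) the result scales proportionally, which is one reason the paper's calibrated parameterization is needed in practice.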
Abstract:
Aerosol properties above clouds have been retrieved over the South East Atlantic Ocean during the 2006 fire season using satellite observations from POLDER (Polarization and Directionality of Earth Reflectances). From June to October, POLDER observed a mean Above-Cloud Aerosol Optical Thickness (ACAOT) of 0.28 and a mean Above-Cloud Single Scattering Albedo (ACSSA) of 0.87 at 550 nm. These results have been used to evaluate the simulation of aerosols above clouds in five AeroCom (Aerosol Comparisons between Observations and Models) models (GOCART, HadGEM3, ECHAM5-HAM2, OsloCTM2 and SPRINTARS). Most models do not reproduce the observed large aerosol load episodes. The comparison highlights the importance of the injection height and the vertical transport parameterizations for simulating the large ACAOT observed by POLDER. Furthermore, POLDER ACSSA is best reproduced by models with a high imaginary part of the black carbon refractive index, in accordance with recent recommendations.
Abstract:
Observations obtained during an 8-month deployment of AMF2 in a boreal environment in Hyytiälä, Finland, and the 20-year comprehensive in-situ data from the SMEAR-II station enable the characterization of biogenic aerosol, clouds and precipitation, and their interactions. During "Biogenic Aerosols - Effects on Clouds and Climate (BAECC)", the U.S. Department of Energy's Atmospheric Radiation Measurement (ARM) Program deployed the ARM 2nd Mobile Facility (AMF2) to Hyytiälä, Finland, for an 8-month intensive measurement campaign from February to September 2014. The primary research goal is to understand the role of biogenic aerosols in cloud formation. Hyytiälä is host to SMEAR-II (Station for Measuring Forest Ecosystem-Atmosphere Relations), one of the world's most comprehensive surface in-situ observation sites in a boreal forest environment. The station has been measuring atmospheric aerosols, biogenic emissions and an extensive suite of parameters relevant to atmosphere-biosphere interactions continuously since 1996. Combining vertical profiles from AMF2 with surface-based in-situ SMEAR-II observations allows the processes at the surface to be directly related to processes occurring throughout the entire tropospheric column. Together with the inclusion of extensive surface precipitation measurements, and intensive observation periods involving aircraft flights and novel radiosonde launches, the complementary observations provide a unique opportunity for investigating aerosol-cloud interactions and cloud-to-precipitation processes in a boreal environment. The BAECC dataset provides opportunities for evaluating and improving models of aerosol sources and transport, cloud microphysical processes, and boundary-layer structures. In addition, numerical models are being used to bridge the gap between surface-based and tropospheric observations.
Abstract:
The Southern Ocean is a critical region for global climate, yet large cloud and solar radiation biases over the Southern Ocean are a long-standing problem in climate models and are poorly understood, leading to biases in simulated sea surface temperatures. This study shows that supercooled liquid clouds are central to understanding and simulating the Southern Ocean environment. A combination of satellite observational data and detailed radiative transfer calculations is used to quantify the impact of cloud phase and cloud vertical structure on the reflected solar radiation in the Southern Hemisphere summer. It is found that clouds with supercooled liquid tops dominate the population of liquid clouds. The observations show that clouds with supercooled liquid tops contribute between 27% and 38% to the total reflected solar radiation between 40° and 70°S, and climate models are found to poorly simulate these clouds. The results quantify the importance of supercooled liquid clouds in the Southern Ocean environment and highlight the need to improve understanding of the physical processes that control these clouds in order to improve their simulation in numerical models. This is not only important for improving the simulation of present-day climate and climate variability, but also relevant for increasing confidence in climate feedback processes and future climate projections.
Abstract:
The diffusion of astrophysical magnetic fields in conducting fluids in the presence of turbulence depends on whether magnetic fields can change their topology via reconnection in highly conducting media. Recent progress in understanding fast magnetic reconnection in the presence of turbulence indicates that the magnetic field behavior in computer simulations and in turbulent astrophysical environments is similar, as far as magnetic reconnection is concerned. This makes it meaningful to perform MHD simulations of turbulent flows in order to understand the diffusion of magnetic fields in astrophysical environments. Our studies of magnetic field diffusion in turbulent media reveal interesting new phenomena. First of all, our three-dimensional MHD simulations initiated with anti-correlated magnetic field and gaseous density exhibit at later times a de-correlation of the magnetic field and density, which corresponds well to observations of the interstellar medium. While earlier studies stressed the role of either ambipolar diffusion or time-dependent turbulent fluctuations in de-correlating magnetic field and density, we obtain a permanent de-correlation with a one-fluid code, i.e., without invoking ambipolar diffusion. In addition, in the presence of gravity and turbulence, our three-dimensional simulations show a decrease of the magnetic flux-to-mass ratio as the gaseous density at the center of the gravitational potential increases. We observe this effect both when we start with equilibrium distributions of gas and magnetic field and when we follow the evolution of collapsing, dynamically unstable configurations. Thus, the process of turbulent magnetic field removal should be applicable both to quasi-static subcritical molecular clouds and cores and to violently collapsing supercritical entities.
The increase of the gravitational potential as well as the magnetization of the gas increases the segregation of the mass and magnetic flux in the saturated final state of the simulations, supporting the notion that the reconnection-enabled diffusivity relaxes the magnetic field + gas system in the gravitational field to its minimal energy state. This effect is expected to play an important role in star formation, from its initial stages of concentrating interstellar gas to the final stages of the accretion to the forming protostar. In addition, we benchmark our codes by studying the heat transfer in magnetized compressible fluids and confirm the high rates of turbulent advection of heat obtained in an earlier study.
Abstract:
The determination of accurate chemical abundances of planetary nebulae (PN) in different galaxies allows us to obtain important constraints on chemical evolution models for these systems. We have a long-term program to derive abundances in the galaxies of the Local Group, particularly the Large and Small Magellanic Clouds. In this work, we present our new results on these objects and discuss their implications in view of recent abundance determinations in the literature. In particular, we obtain distance-independent correlations involving He, N, O, Ne, S, and Ar, and compare the results with data from our own Galaxy and other galaxies in the Local Group. As a result of our observational program, we have a large database of PN in the Galaxy and the Magellanic Clouds, so that we can obtain reliable constraints on the nucleosynthesis processes in the progenitor stars in galaxies of different metallicities.
Abstract:
In this work, considering the impact of a supernova remnant (SNR) on a neutral magnetized cloud, we derive analytically a set of conditions that are favourable for driving gravitational instability in the cloud and thus star formation. Using these conditions, we build diagrams of the SNR radius, R(SNR), versus the initial cloud density, n(c), that constrain a domain in the parameter space where star formation is allowed. This work extends a previous study performed without considering magnetic fields (Melioli et al. 2006, hereafter Paper I). The diagrams are also tested with fully three-dimensional MHD radiative cooling simulations involving an SNR and a self-gravitating cloud, and we find that the numerical analysis is consistent with the results predicted by the diagrams. While the inclusion of a homogeneous magnetic field approximately perpendicular to the impact velocity of the SNR, with an intensity of ∼1 μG within the cloud, results in only a small shrinking of the star formation zone in the diagram relative to that without a magnetic field, a larger magnetic field (∼10 μG) causes a significant shrinking, as expected. Though derived from simple analytical considerations, these diagrams provide a useful tool for identifying sites where star formation could be triggered by the impact of a supernova blast wave. Applying them to a few regions of our own Galaxy (e.g. the large CO shell in the direction of Cassiopeia, and the Edge Cloud 2 in the direction of the Scorpius constellation) has revealed that star formation in those sites could have been triggered by shock waves from SNRs for specific values of the initial neutral cloud density and the SNR radius. Finally, we have evaluated the effective star formation efficiency for this sort of interaction and found that it is generally smaller than the observed values in our own Galaxy (SFE ∼ 0.01-0.3).
This result is consistent with previous work in the literature and also suggests that the mechanism presently investigated, though very powerful to drive structure formation, supersonic turbulence and eventually, local star formation, does not seem to be sufficient to drive global star formation in normal star-forming galaxies, not even when the magnetic field in the neutral clouds is neglected.
Abstract:
K-band spectra of young stellar candidates in four Southern hemisphere clusters have been obtained with the Gemini Near-Infrared Spectrograph at Gemini South. The clusters are associated with IRAS sources that have colours characteristic of ultracompact H II regions. Spectral types were obtained by comparison of the observed spectra with those of a near-infrared (NIR) library; the results include the spectral classification of nine massive stars and seven objects confirmed as background late-type stars. Two of the studied sources have K-band spectra compatible with those of very hot stars, as inferred from the presence of C IV, N III and N V emission lines at 2.078, 2.116 and 2.100 μm, respectively. One of them, I16177_IRS1, has a K-band spectrum similar to that of Cyg OB2 7, an O3If* supergiant star. The nebular K-band spectrum of the associated Ultra-Compact (UC) H II region shows the s-process [Kr III] and [Se IV] high-excitation emission lines, previously identified only in planetary nebulae. One young stellar object was found in each cluster, associated with either the main IRAS source or a nearby resolved Midcourse Space eXperiment (MSX) component, confirming the results obtained from previous NIR photometric surveys. The distances to the stars were derived from their spectral types and previously determined JHK magnitudes; they agree well with the values obtained from the kinematic method, except in the case of IRAS 15408-5356, for which the spectroscopic distance is about a factor of 2 smaller than the kinematic value.
Abstract:
Emission line ratios have been essential for determining physical parameters such as gas temperature and density in astrophysical gaseous nebulae. With the advent of panoramic spectroscopic devices, images of regions with emission lines related to these physical parameters can, in principle, also be produced. We show that, with observations from modern instruments, it is possible to transform images taken from density-sensitive forbidden lines into images of emission from high- and low-density clouds by applying a transformation matrix. In order to achieve this, images of the pairs of density-sensitive lines as well as the adjacent continuum have to be observed and combined. We have computed the critical densities for a series of pairs of lines in the infrared, optical, ultraviolet and X-rays bands, and calculated the pair line intensity ratios in the high- and low-density limit using a four- and five-level atom approximation. In order to illustrate the method, we applied it to Gemini Multi-Object Spectrograph (GMOS) Integral Field Unit (GMOS-IFU) data of two galactic nuclei. We conclude that this method provides new information of astrophysical interest, especially for mapping low- and high-density clouds; for this reason, we call it `the ld/hd imaging method`.
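The transformation-matrix step can be illustrated with a minimal sketch. The assumptions here are not taken from the paper's actual pipeline: a two-component emission model, continuum-subtracted input images, and known intensity ratios of the line pair in the low- and high-density limits; the function name and interface are hypothetical.

```python
import numpy as np

def ld_hd_decompose(img_line1, img_line2, r_low, r_high):
    """Per-pixel decomposition of two continuum-subtracted images of a
    density-sensitive line pair into low- and high-density components.
    Model: I1 = f_lo + f_hi and I2 = r_low * f_lo + r_high * f_hi, where
    r_low and r_high are the pair's intensity ratios (line2 / line1) in
    the low- and high-density limits. Inverting the 2x2 mixing matrix
    gives maps of the emission from low- and high-density clouds."""
    det = r_high - r_low
    # Analytic inverse of [[1, 1], [r_low, r_high]], applied pixel-wise.
    f_lo = (r_high * img_line1 - img_line2) / det
    f_hi = (img_line2 - r_low * img_line1) / det
    return f_lo, f_hi
```

Because the inversion is linear, it applies elementwise to whole images at once; in practice the limiting ratios would come from the multi-level atom calculations described in the text.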
Abstract:
The Amazon is one of the few continental regions where atmospheric aerosol particles and their effects on climate are not dominated by anthropogenic sources. During the wet season, the ambient conditions approach those of the pristine pre-industrial era. We show that the fine submicrometer particles accounting for most cloud condensation nuclei are predominantly composed of secondary organic material formed by oxidation of gaseous biogenic precursors. Supermicrometer particles, which are relevant as ice nuclei, consist mostly of primary biological material directly released from rainforest biota. The Amazon Basin appears to be a biogeochemical reactor, in which the biosphere and atmospheric photochemistry produce nuclei for clouds and precipitation sustaining the hydrological cycle. The prevailing regime of aerosol-cloud interactions in this natural environment is distinctly different from polluted regions.
Abstract:
Cloud computing means using computing resources that are available over a network, usually the Internet, and is an area that has grown rapidly in recent years. More and more companies are migrating all or parts of their operations to the cloud. Sogeti in Borlänge needs to migrate its development environments to a cloud service, since operating and maintaining them is costly and time-consuming. As a Microsoft partner, Sogeti wants to use Microsoft's cloud computing service, Windows Azure, for this purpose. Migration to the cloud is a new area for Sogeti, and they have no descriptions of how such a process is carried out. Our assignment was to develop an approach for migrating an IT solution to the cloud. Part of the assignment was therefore to survey cloud computing, its components, and its advantages and disadvantages, which gave us fundamental knowledge of the subject. To develop a migration approach, we performed several migrations of virtual machines to Windows Azure and, based on these migrations, literature studies and interviews, drew conclusions that resulted in a general approach for migration to the cloud. The results have shown that it is difficult to produce a general yet detailed description of a migration approach, since the scenario differs depending on what is to be migrated and which type of cloud service is used. However, based on the experience from our migrations, together with literature studies, document studies and interviews, we have raised our knowledge to a general level. From this knowledge we have compiled a general approach with greater focus on the preparatory activities an organization should carry out before migration. Our studies have also resulted in a deeper description of cloud computing. In our study we have not seen any previous description of critical success factors in connection with cloud computing.
In our empirical work, however, we have identified three critical success factors for cloud computing, thereby covering part of that knowledge gap.
Abstract:
Learning from anywhere at any time is a contemporary phenomenon in the field of education that is thought to be flexible, time-saving and cost-saving. The phenomenon is evident in the way computer technology mediates knowledge processes among learners. Computer technology is, however, sometimes faulted; there are studies that highlight drawbacks of computer technology use in learning. In this study we aimed to conduct a SWOT analysis of ubiquitous computing and computer-mediated social interaction and their effect on education. Students and teachers were interviewed on the mentioned concepts using focus group interviews. Our contribution in this study is identifying what teachers and students perceive to be the strengths, weaknesses, opportunities and threats of ubiquitous computing and computer-mediated social interaction in education. We also relate the findings to the literature and present a common understanding of the SWOT of these concepts. Results show positive perceptions. Respondents revealed that ubiquitous computing and computer-mediated social interaction are important in their education due to advantages such as flexibility, efficiency in terms of cost and time, and the ability to acquire computer skills. Nevertheless, disadvantages were also mentioned, for example health effects, privacy and security issues, and noise in the learning environment, to mention but a few. This paper gives suggestions on how to overcome the threats mentioned.
Abstract:
The ever-increasing spurt of digital crimes such as image manipulation, image tampering, signature forgery, image forgery and illegal transactions has intensified the demand to combat these forms of criminal activity. In this direction, biometrics - the computer-based validation of a person's identity - is becoming more and more essential, particularly for high-security systems. The essence of biometrics is the measurement of a person's physiological or behavioral characteristics, which enables authentication of a person's identity. Biometric-based authentication is also becoming increasingly important in computer-based applications because the amount of sensitive data stored in such systems is growing. The new demands on biometric systems are robustness, high recognition rates, the capability to handle imprecision and uncertainties of a non-statistical kind, and great flexibility. It is exactly here that soft computing techniques come into play. The main aim of this write-up is to present a pragmatic view of applications of soft computing techniques in biometrics and to analyze their impact. It is found that soft computing has already made inroads, in terms of individual methods or in combination. Applications of varieties of neural networks top the list, followed by fuzzy logic and evolutionary algorithms. In a nutshell, soft computing paradigms are used for biometric tasks such as feature extraction, dimensionality reduction, pattern identification, pattern mapping and the like.