990 results for SMALL-MAGELLANIC-CLOUD
Abstract:
Nine H II regions of the LMC were mapped in ¹³CO(1-0) and three in ¹²CO(1-0) to study the physical properties of the interstellar medium in the Magellanic Clouds. For N113 the molecular core is found to have a peak position which differs from that of the associated H II region by 20″. Toward this molecular core the ¹²CO and ¹³CO peak T_MB line temperatures of 7.3 K and 1.2 K are the highest so far found in the Magellanic Clouds. The molecular concentrations associated with N113, N44BC, N159HW, and N214DE in the LMC and LIRS 36 in the SMC were investigated in a variety of molecular species to study the chemical properties of the interstellar medium. I(HCO⁺)/I(HCN) and I(HCN)/I(HNC) intensity ratios, as well as lower limits to the I(¹³CO)/I(C¹⁸O) ratio, were derived for the rotational 1-0 transitions. Generally, HCO⁺ is stronger than HCN, and HCN is stronger than HNC. The high relative HCO⁺ intensities are consistent with a high ionization flux from supernova remnants and young stars, possibly coupled with a large extent of the HCO⁺ emission region. The bulk of the HCN arises from relatively compact dense cloud cores. Warm or shocked gas enhances HCN relative to HNC. Chemical model calculations predict that I(HCN)/I(HNC) close to one should be obtained with higher angular resolution (≲30″) toward the cloud cores. Comparing virial masses with those obtained from the integrated CO intensity provides an H₂ mass-to-CO luminosity conversion factor of 1.8 × 10²⁰ mol cm⁻² (K km s⁻¹)⁻¹ for N113 and 2.4 × 10²⁰ mol cm⁻² (K km s⁻¹)⁻¹ for N44BC. This is consistent with values derived for the Galactic disk.
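The mass-to-luminosity conversion factor described in the abstract above is a simple ratio of molecule count to CO luminosity. A minimal sketch of that arithmetic follows; the cloud mass and CO luminosity used here are illustrative placeholders, not the paper's measured values, and helium and density-profile corrections are omitted.

```python
M_SUN_G = 1.989e33   # solar mass [g]
M_H_G = 1.6726e-24   # hydrogen atom mass [g]
PC_CM = 3.086e18     # parsec [cm]

def x_factor(m_h2_msun, l_co):
    """H2 mass-to-CO luminosity conversion factor X = N(H2)/W(CO)
    in cm^-2 (K km s^-1)^-1, given an H2 mass in solar masses and a
    CO luminosity in K km s^-1 pc^2 (He contribution neglected)."""
    n_h2 = m_h2_msun * M_SUN_G / (2.0 * M_H_G)  # number of H2 molecules
    return n_h2 / (l_co * PC_CM ** 2)           # column per unit intensity

# Hypothetical cloud: M_vir = 3e5 Msun, L_CO = 1e5 K km/s pc^2
print(f"X = {x_factor(3e5, 1e5):.2e}")  # of order 1e20, like the paper's values
```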
Abstract:
Current trends in broadband mobile networks point towards the placement of different capabilities at the edge of the mobile network in a centralised way. On the one hand, the split of the eNB between baseband processing units and remote radio heads makes it possible to process some of the protocols in centralised premises, likely with virtualised resources. On the other hand, mobile edge computing makes use of processing and storage capabilities close to the air interface in order to deploy optimised services with minimum delay. The confluence of both trends is a hot topic in the definition of future 5G networks. The full centralisation of both technologies in cloud data centres imposes stringent requirements on the fronthaul connections in terms of throughput and latency. Therefore, cells with limited network access would not be able to offer these types of services. This paper proposes a solution for these cases, based on the placement of processing and storage capabilities close to the remote units, which is especially well suited to the deployment of clusters of small cells. The proposed cloud-enabled small cells include a highly efficient microserver with a limited set of virtualised resources offered to the cluster of small cells. As a result, a light data centre is created and commonly used for deploying centralised eNB and mobile edge computing functionalities. The paper covers the proposed architecture, with special focus on the integration of both aspects, and possible scenarios of application.
Abstract:
Aim: To quantify the consequences of major threats to biodiversity, such as climate and land-use change, it is important to use explicit measures of species persistence, such as extinction risk. The extinction risk of metapopulations can be approximated through simple models, providing a regional snapshot of the extinction probability of a species. We evaluated the extinction risk of three species under different climate change scenarios in three different regions of the Mexican cloud forest, a highly fragmented habitat that is particularly vulnerable to climate change. Location: Cloud forests in Mexico. Methods: Using Maxent, we estimated the potential distribution of cloud forest for three different time horizons (2030, 2050 and 2080) and their overlap with protected areas. Then, we calculated the extinction risk of three contrasting vertebrate species for two scenarios: (1) climate change only (all suitable areas of cloud forest through time) and (2) climate and land-use change (only suitable areas within a currently protected area), using an explicit patch-occupancy approximation model and calculating the joint probability of all populations becoming extinct when the number of remaining patches was less than five. Results: Our results show that the extent of environmentally suitable areas for cloud forest in Mexico will sharply decline in the next 70 years. We discovered that if all habitat outside protected areas is transformed, then only species with small area requirements are likely to persist. With habitat loss through climate change only, high dispersal rates are sufficient for persistence, but this requires protection of all remaining cloud forest areas. Main conclusions: Even if high dispersal rates mitigate the extinction risk of species due to climate change, the synergistic impacts of changing climate and land use further threaten the persistence of species with higher area requirements. 
Our approach for assessing the impacts of threats on biodiversity is particularly useful when there is little time or data for detailed population viability analyses. © 2013 John Wiley & Sons Ltd.
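The joint-probability step described in the Methods above (all populations going extinct once few patches remain) can be illustrated with a minimal sketch. The independence assumption and the example probabilities below are ours for illustration; the paper's patch-occupancy approximation is more detailed.

```python
def joint_extinction_probability(patch_ext_probs):
    """Probability that every remaining local population goes extinct,
    assuming independent local extinctions (a simplifying assumption)."""
    p = 1.0
    for prob in patch_ext_probs:
        p *= prob  # multiply local extinction probabilities
    return p

# Illustrative: four remaining patches with local extinction probabilities
print(round(joint_extinction_probability([0.9, 0.8, 0.7, 0.6]), 4))  # 0.3024
```

With fewer patches the product has fewer factors below one, so regional extinction risk rises sharply as patches are lost, which is why the paper evaluates the joint probability once fewer than five patches remain.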
Abstract:
This study proposes that the adoption process of complex corporate-wide systems (e.g. cloud ERP) should be observed as a multi-stage set of actions. Two theoretical lenses were utilised for this study, with critical adoption factors identified through the theory of planned behaviour and the progression of each adoption factor observed through Ettlie's (1980) multi-stage adoption model. Using a survey method, this study employed data gathered from 162 decision-makers of small and medium-sized enterprises (SMEs). Applying both linear and non-linear approaches to the data analysis, the study findings show that the level of importance of adoption factors changes across different adoption stages.
Abstract:
This research explored how small and medium enterprises can achieve success with software as a service (SaaS) applications from the cloud. Based upon an empirical investigation of six growth-oriented, early-technology-adopting small and medium enterprises, this study proposes a SaaS success model for small and medium enterprises with two approaches: one for basic and one for advanced benefits. The basic model explains the effective use of SaaS for achieving informational and transactional benefits. The advanced model explains the enhanced use of SaaS for achieving strategic and transformational benefits. Both models explicate the information systems capabilities and organizational complementarities needed for achieving success with SaaS.
Abstract:
The advent of cloud technology, with its low subscription overhead costs, has provided small and medium-sized enterprises (SMEs) with the opportunity to adopt new cloud-based corporate-wide systems (i.e., cloud ERP). This technology, operating through subscription-based services, now offers SMEs a complete range of IT applications that were once restricted to larger organisations. As anecdotal evidence suggests, SMEs are increasingly adopting cloud-based ERP software. The selection of an ERP is a complex process involving multiple stages and stakeholders, suggesting the importance of a closer examination of cloud ERP adoption in SMEs. Yet prior studies have predominantly treated technology adoption as a single activity and largely ignored the issue of ERP adoption in SMEs. Understanding the process nature of adoption and the factors that are important at each stage can guide SMEs to make well-informed decisions throughout the ERP selection process. Thus, our study proposes that the adoption of cloud ERP should be examined as a multi-stage process. Using the Theory of Planned Behaviour (TPB) and Ettlie's adoption stages, and employing data gathered from 162 owners of SMEs, our findings show that the factors that influence the intention to adopt cloud ERP vary significantly across adoption stages.
Abstract:
It is now clearly understood that atmospheric aerosols have a significant impact on climate due to their important role in modifying the incoming solar and outgoing infrared radiation. Whether aerosol cools (negative forcing) or warms (positive forcing) the planet depends on the relative dominance of absorbing aerosols. Recent investigations over the tropical Indian Ocean have shown that, despite its comparatively small contribution to optical depth (~11%), soot plays an important role in the overall radiative forcing. However, when the amount of absorbing aerosols such as soot is significant, aerosol optical depth and chemical composition are not the only determinants of aerosol climate effects; the altitude of the aerosol layer and the altitude and type of clouds are also important. In this paper, the aerosol forcing in the presence of clouds and the effect of different surface types (ocean, soil, vegetation, and different combinations of soil and vegetation) are examined through model simulations, demonstrating that aerosol forcing changes sign from negative (cooling) to positive (warming) when reflection from below (due either to land or to clouds) is high.
Abstract:
STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher, as well as the level of confidence in the model being analyzed, is greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferable. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was made between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two software packages on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in software more capable of conducting highly nonlinear analysis, called Perform. These analyses again showed very strong agreement between the two packages in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron-braced frame, the two-bay chevron-braced frame, and the twenty-story moment frame could not be conducted. With the current trend towards ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
Following this, a final study was done on Hall's U20 structure [1], in which the structure was analyzed in all three software packages and the results compared. The pushover curves from each package were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps over which the ETABS analysis was converging, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.
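The damping estimate from free vibration mentioned above is conventionally obtained from the logarithmic decrement of successive displacement peaks. The sketch below shows that standard calculation under an assumed viscously damped single-mode response; the peak amplitudes are made up for illustration and are not the study's data.

```python
import math

def damping_ratio_from_peaks(peaks):
    """Estimate the viscous damping ratio from successive free-vibration
    peak amplitudes via the logarithmic decrement:
    delta = ln(x_i / x_{i+1}),  zeta = delta / sqrt(4*pi^2 + delta^2)."""
    # Average the decrement over each consecutive pair of peaks
    decs = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
    delta = sum(decs) / len(decs)
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# Illustrative decaying peaks: each about 73% of the previous one
print(round(damping_ratio_from_peaks([1.0, 0.73, 0.53, 0.39]), 3))  # ~0.05
```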
Abstract:
Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm based on "on-demand payment" for information and communication technologies. In this sense, small and medium enterprises are expected to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even if the characteristics and capacities have been widely discussed, entry into the cloud is still lacking in terms of practical, real frameworks. This paper aims at filling this gap by presenting a real tool, already implemented and tested, which can be used as a cloud computing adoption decision tool. This tool uses a diagnosis based on specific questions to gather the required information and subsequently provide the user with valuable information for deploying the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level with a two-fold objective: to ascertain the degree of knowledge on cloud computing and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest in and low knowledge of this subject, and the tool presented aims to redress this mismatch insofar as possible. © 2015 Bildosola et al.
Abstract:
The surge of Internet traffic, with exabytes of data flowing over operators' mobile networks, has created the need to rethink the paradigms behind the design of the mobile network architecture. The inadequacy of 4G UMTS Long Term Evolution (LTE), and even of its advanced version LTE-A, is evident, considering that traffic in the near future will be extremely heterogeneous, ranging from 4K-resolution TV to machine-type communications. To keep up with these changes, academia, industry and EU institutions have now engaged in the quest for new 5G technology. In this paper we present the innovative system design, concepts and visions developed by the 5G PPP H2020 project SESAME (Small cEllS coordinAtion for Multi-tenancy and Edge services). The innovation of SESAME is manifold: i) combine the key 5G small cells with cloud technology, ii) promote and develop the concept of Small-Cells-as-a-Service (SCaaS), iii) bring computing and storage power to the mobile network edge through the development of non-x86 ARM-technology-enabled micro-servers, and iv) address a large number of scenarios and use cases applying mobile edge computing.
Abstract:
We describe medium-resolution spectroscopic observations taken with the ESO Multi-Mode Instrument (EMMI) in the Ca II K line (λ_air = 3933.661 Å) towards 7 QSOs located in the line of sight to the Magellanic Bridge. At a spectral resolution R = λ/Δλ = 6000, five of the sightlines have a signal-to-noise (S/N) ratio of ~20 or higher. Definite Ca absorption due to Bridge material is detected towards 3 objects, with probable detection towards two other sightlines. Gas-phase Ca II K Bridge and Milky Way abundances, or lower limits, for all the sightlines are estimated using Parkes 21-cm H I emission line data. These data have a spatial resolution of only 14 arcmin, compared with the optical observations, which have milli-arcsecond resolution. With this caveat, for the three objects with sound Ca II K detections, we find that the ionic abundance of Ca II K relative to H I, A = log(N(CaK)/N(HI)), for low-velocity Galactic gas ranges from -8.3 to -8.8 dex, with H I column densities varying from 3-6 × 10²⁰ cm⁻². For Magellanic Bridge gas, the values of A are ~0.5 dex higher, ranging from ~ -7.8 to -8.2 dex, with N(HI) = 1-5 × 10²⁰ cm⁻². Higher values of A correspond to lower values of N(HI), although the numbers are small. For the sightline towards B0251-675, the Bridge gas has two different velocities, and in only one of these is Ca II tentatively detected, perhaps indicating gas of a different origin or different present-day characteristics (such as dust content), although this conclusion is uncertain and one of the components could be related to the Magellanic Stream. Higher signal-to-noise Ca II K data and higher resolution H I data are required to determine whether A changes with N(HI) over the Bridge and whether the implied difference in the metallicity of the two Bridge components towards B0251-675 is real.
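The abundance measure A used above is just a base-10 logarithm of a column-density ratio. A minimal sketch, with illustrative column densities rather than the paper's measurements:

```python
import math

def ionic_abundance(n_ca, n_hi):
    """A = log10(N(CaII K)/N(HI)) for column densities in cm^-2."""
    return math.log10(n_ca / n_hi)

# Hypothetical columns: N(CaII) = 5e12 cm^-2 against N(HI) = 3e20 cm^-2,
# which lands in the ~-7.8 dex regime quoted for Bridge gas.
print(round(ionic_abundance(5e12, 3e20), 2))  # -7.78
```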
Abstract:
High-resolution Hubble Space Telescope ultraviolet spectra for five B-type stars in the Magellanic Bridge and in the Large (LMC) and Small (SMC) Magellanic Clouds have been analysed to estimate their iron abundances. Those for the Clouds are lower than estimates obtained from late-type stars or the optical lines in B-type stars by approximately 0.5 dex. This may be due to systematic errors possibly arising from non-local thermodynamic equilibrium (non-LTE) effects or from errors in the atomic data, as similar low Fe abundances have previously been reported from the analysis of the ultraviolet spectra of Galactic early-type stars. The iron abundance estimates for all three Bridge targets appear to be significantly lower than those found for the SMC and LMC by approximately -0.5 and -0.8 dex, respectively, and these differential results should not be affected by any systematic errors present in the absolute abundance estimates. These differential iron abundance estimates are consistent with the underabundances for C, N, O, Mg and Si of approximately -1.1 dex relative to our Galaxy previously found in our Bridge targets. The implications of these very low metal abundances for the Magellanic Bridge are discussed in terms of metal deficient material being stripped from the SMC.
Abstract:
Experimental data are presented for liquid-liquid equilibria of mixtures of the room-temperature ionic liquid 1-ethyl-3-methyl-imidazolium bis(trifluoromethylsulfonyl)imide ([C2MIM][NTf2]) with the three alcohols propan-1-ol, butan-1-ol, and pentan-1-ol, and of 1-butyl-3-methyl-imidazolium bis(trifluoromethylsulfonyl)imide ([C4MIM][NTf2]) with cyclohexanol and 1,2-hexanediol, in the temperature range of 275 K to 345 K at ambient pressure. The synthetic method was used: cloud points at a given composition were observed by varying the temperature and using light scattering to detect the phase splitting. In addition, the influence of small amounts of water on the demixing temperatures of binary mixtures of [C2MIM][NTf2] with propan-1-ol, butan-1-ol, and pentan-1-ol was investigated.
Abstract:
With the availability of a wide range of cloud Virtual Machines (VMs), it is difficult to determine which VMs can maximise the performance of an application. Benchmarking is commonly used to this end to capture the performance of VMs. Most cloud benchmarking techniques are heavyweight: time-consuming processes that have to benchmark the entire VM in order to obtain accurate benchmark data. Such benchmarks cannot be used in real time on the cloud and incur extra costs even before an application is deployed.
In this paper, we present lightweight cloud benchmarking techniques that execute quickly and can be used in near real time on the cloud. The exploration of lightweight benchmarking techniques is facilitated by the development of DocLite (Docker Container-based Lightweight Benchmarking). DocLite is built on Docker container technology, which allows a user-defined portion (such as the memory size and the number of CPU cores) of the VM to be benchmarked. DocLite operates in two modes: in the first mode, containers are used to benchmark a small portion of the VM to generate performance ranks; in the second mode, historic benchmark data are used along with the first mode, as a hybrid, to generate VM ranks. The generated ranks are evaluated against three scientific high-performance computing applications. The proposed techniques are up to 91 times faster than a heavyweight technique which benchmarks the entire VM. It is observed that the first mode can generate ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, respectively. The hybrid mode improves the correlation slightly, but the first mode is sufficient for benchmarking cloud VMs.
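The ranking step described above (turning per-VM benchmark scores into an ordered list) can be sketched in a few lines. This is an illustrative sketch of the idea, not DocLite's actual code, and the VM names and scores below are hypothetical.

```python
def rank_vms(scores):
    """Rank VM types by benchmark score, best first.
    `scores` maps a VM name to a single aggregated benchmark score
    (higher is better) obtained from a container-sized benchmark run."""
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical container benchmark scores for three VM types:
print(rank_vms({"vm.medium": 42.0, "vm.large": 87.5, "vm.small": 19.3}))
# -> best-performing VM type listed first
```

A hybrid mode, as the paper describes it, would combine such container-derived scores with historic whole-VM benchmark data before sorting; the sort itself is unchanged.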
Abstract:
Existing benchmarking methods are time-consuming processes, as they typically benchmark the entire Virtual Machine (VM) in order to generate accurate performance data, making them less suitable for real-time analytics. The research in this paper aims to surmount this challenge by presenting DocLite, a Docker Container-based Lightweight benchmarking tool. DocLite explores lightweight cloud benchmarking methods for rapidly executing benchmarks in near real time. DocLite is built on Docker container technology, which allows a user-defined memory size and number of CPU cores of the VM to be benchmarked. The tool incorporates two benchmarking methods: the first, referred to as the native method, employs containers to benchmark a small portion of the VM and generate performance ranks; the second uses historic benchmark data along with the native method, as a hybrid, to generate VM ranks. The proposed methods are evaluated on three use cases and are observed to be up to 91 times faster than benchmarking the entire VM. In both methods, small containers provide the same quality of rankings as a large container. The native method generates ranks with over 90% and 86% accuracy for sequential and parallel execution of an application, compared against benchmarking the whole VM. The hybrid method did not improve the quality of the rankings significantly.