820 results for OPHIUCHI CLOUD CORE
Abstract:
Aims. The CMa R1 star-forming region contains several compact clusters as well as many young early-B stars. It is associated with a well-known bright rimmed nebula, the nature of which is unclear (fossil HII region or supernova remnant). To help elucidate the nature of the nebula, our goal was to reconstruct the star-formation history of the CMa R1 region, including the previously unknown older, fainter low-mass stellar population, using X-rays. Methods. We analyzed images obtained with the ROSAT satellite, covering ~5 sq. deg. Complementary VRI photometry was performed with the Gemini South telescope. Colour-magnitude and colour-colour diagrams were used in conjunction with pre-main sequence evolutionary tracks to derive the masses and ages of the X-ray sources. Results. The ROSAT images show two distinct clusters. One is associated with the known optical clusters near Z CMa, to which ~40 members are added. The other, which we name the "GU CMa" cluster, is new, and contains ~60 members. The ROSAT sources are young stars with masses down to M* ~ 0.5 M⊙, and ages up to 10 Myr. The mass functions of the two clusters are similar, but the GU CMa cluster is older than the cluster around Z CMa by at least a few Myr. Also, the GU CMa cluster is away from any molecular cloud, implying that star formation must have ceased; on the contrary (as already known), star formation is very active in the Z CMa region.
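For illustration only, the sketch below shows how masses and ages can be read off a colour-magnitude diagram by matching each source to the nearest point of a grid of pre-main-sequence models. The tiny grid, its column layout and the helper function are hypothetical placeholders, not the evolutionary tracks or code used in the paper.

```python
# Minimal sketch: assign (mass, age) from a colour-magnitude diagram position
# using a grid of pre-main-sequence model points. The grid values below are
# made-up placeholders, not an actual set of evolutionary tracks.
import numpy as np

# columns: V-I colour, absolute V magnitude, mass [Msun], age [Myr]
grid = np.array([
    [1.0, 4.5, 1.5,  1.0],
    [1.4, 5.8, 1.0,  3.0],
    [2.0, 7.5, 0.7,  5.0],
    [2.6, 9.0, 0.5, 10.0],
])

def mass_age(v_minus_i, abs_v):
    """Return (mass, age) of the nearest model point in the CMD plane."""
    d2 = (grid[:, 0] - v_minus_i) ** 2 + (grid[:, 1] - abs_v) ** 2
    mass, age = grid[np.argmin(d2), 2:]
    return mass, age

print(mass_age(1.5, 6.0))   # e.g. a candidate member near the 1 Msun, 3 Myr point
```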
Abstract:
The objective of this article is to demonstrate the feasibility of on-demand creation of cloud-based elastic mobile core networks, along with their lifecycle management. For this purpose, the article describes the key elements needed to realize the architectural vision of EPC as a Service, an implementation option of the Evolved Packet Core, as specified by 3GPP, which can be deployed in cloud environments. To meet the challenging requirements associated with implementing EPC over a cloud infrastructure and providing it “as a Service,” this article presents a number of different options, each with different characteristics, advantages, and disadvantages. A thorough analysis comparing the different implementation options is also presented.
Abstract:
The MAP-i Doctoral Program of the Universities of Minho, Aveiro and Porto.
Abstract:
Modern cloud services offer large companies the opportunity to make computational data processing more efficient. Adopting cloud services, however, brings with it a number of issues, such as information security questions, which is why adoption must be carefully planned. This study presents a stepwise, literature-based plan for adopting cloud services in an energy business environment. Internal interviews at the target company and a review of current cloud solutions in the energy industry form an overall picture of the challenges and opportunities of adoption. The main objective of the study is to present, by means of an adoption model, solutions to the problems that typically arise when adopting cloud services. The adoption model constructed in the study was tested on an example case and was found to work. Because of the information security questions raised by external services, the first parts of the adoption process, such as defining the end product and careful planning, form the core of the whole process. In addition, adopting cloud services requires new technical and administrative skills of the current operating environment. The results of the study demonstrate the versatile benefits of cloud services, especially when the need for computing power varies. The cost comparison produced alongside the adoption model supports the benefits highlighted in the literature review and gives the target company a basis for taking the research further.
Abstract:
The manufacturing industry has always faced the challenge of improving production efficiency, product quality and innovation ability, while struggling to adopt cost-effective manufacturing systems. In recent years, cloud computing has emerged as one of the major enablers for the manufacturing industry. Combining cloud computing with other advanced manufacturing technologies such as the Internet of Things, service-oriented architecture (SOA), networked manufacturing (NM) and manufacturing grid (MGrid), as well as with existing manufacturing models and enterprise information technologies, the recent literature has proposed a new paradigm called cloud manufacturing. This study presents the concepts and ideas of cloud computing and cloud manufacturing. The concept, architecture, core enabling technologies, and typical characteristics of cloud manufacturing are discussed, as well as the difference and relationship between cloud computing and cloud manufacturing. The research is based on mixed qualitative and quantitative methods and a case study. The case is a prototype cloud manufacturing solution, a software platform developed in cooperation between ATR Soft Oy and the SW Company China office. This study tries to understand the practical impacts and challenges that derive from cloud manufacturing. The main conclusion of this study is that cloud manufacturing is an approach to achieve the transformation from traditional production-oriented manufacturing to next-generation service-oriented manufacturing. Many manufacturing enterprises are already using a form of cloud computing in their existing network infrastructure to increase the flexibility of their supply chains and reduce resource consumption; the study finds that the shift from cloud computing to cloud manufacturing is feasible. At the same time, the study points out that the related theory, methodology and applications of cloud manufacturing systems are far from mature, and that it is still an open field in which many new technologies need to be studied.
Abstract:
This study focuses on the occurrence and type of clouds observed in West Africa, a subject which has neither been much documented nor quantified. It takes advantage of data collected above Niamey in 2006 with the ARM mobile facility. A survey of cloud characteristics inferred from ground measurements is presented with a focus on their seasonal evolution and diurnal cycle. Four types of clouds are distinguished: high-level clouds, deep convective clouds, shallow convective clouds and mid-level clouds. A frequent occurrence of the latter clouds, located at the top of the Saharan Air Layer, is highlighted. High-level clouds are ubiquitous throughout the period, whereas shallow convective clouds are mainly noticeable during the core of the monsoon. The diurnal cycle of each cloud category and its seasonal evolution are investigated. CloudSat and CALIPSO data are used to demonstrate that these four cloud types (in addition to stratocumulus clouds over the ocean) are not a particularity of the Niamey region and that mid-level clouds are present over the Sahara during most of the monsoon season. Moreover, using complementary data sets, the radiative impact of each type of cloud at the surface has been quantified in the shortwave and longwave domains. Mid-level clouds have the largest impact in the longwave (about 15 W m−2) and anvil clouds in the shortwave (about 150 W m−2). Furthermore, mid-level clouds exert a strong radiative forcing in spring, at a time when the other cloud types are less numerous.
Abstract:
The simulated annealing approach to crystal structure determination from powder diffraction data, as implemented in the DASH program, is readily amenable to parallelization at the individual run level. Very large scale increases in speed of execution can be achieved by distributing individual DASH runs over a network of computers. The CDASH program delivers this by using scalable on-demand computing clusters built on the Amazon Elastic Compute Cloud service. By way of example, a 360 vCPU cluster returned the crystal structure of racemic ornidazole (Z′ = 3, 30 degrees of freedom) ca 40 times faster than a typical modern quad-core desktop CPU. Whilst used here specifically for DASH, this approach is of general applicability to other packages that are amenable to coarse-grained parallelism strategies.
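The sketch below illustrates the run-level (coarse-grained) parallelism described above: independent simulated-annealing runs are farmed out to a pool of workers and the best result is kept, which is why throughput scales almost linearly with core count. It is not the CDASH implementation; run_sa and its toy cost function are placeholders.

```python
# Minimal sketch of run-level (coarse-grained) parallelism for a simulated-
# annealing search. Not the CDASH code: run_sa() and its parameters are
# hypothetical placeholders for a real structure-solution run.
import math
import random
from multiprocessing import Pool

def run_sa(seed, n_moves=50_000):
    """One independent simulated-annealing run; returns (best_cost, best_x)."""
    rng = random.Random(seed)
    x = rng.random()
    cost = (x - 0.42) ** 2                               # stand-in for the profile chi-squared
    best_cost, best_x = cost, x
    temp = 1.0
    for _ in range(n_moves):
        candidate = x + rng.gauss(0, 0.1)
        cand_cost = (candidate - 0.42) ** 2
        if cand_cost < cost or rng.random() < math.exp(-(cand_cost - cost) / temp):
            x, cost = candidate, cand_cost               # Metropolis acceptance
        if cost < best_cost:
            best_cost, best_x = cost, x
        temp *= 0.9999                                   # slow cooling schedule
    return best_cost, best_x

if __name__ == "__main__":
    # Each run is independent, so they scale almost linearly with the number of
    # cores, whether those sit in one desktop or in a cloud-provisioned cluster.
    with Pool() as pool:
        results = pool.map(run_sa, range(48))            # 48 independent runs
    print("best run:", min(results))
```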
Abstract:
The aim of this work was to study the dense cloud structures and to obtain the mass distribution of the dense cores (CMF) within the NGC6357 complex, from observations of the dust continuum at 450 and 850 μm of a 30 × 30 arcmin² region containing the H II regions G353.2+0.9 and G353.1+0.6.
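For context, core masses are commonly derived from (sub)millimetre dust continuum fluxes with the standard optically thin relation below; summing such masses core by core yields the core mass function (CMF) referred to above. The relation and symbols are generic, not values or opacity assumptions taken from this particular study.

```latex
% Optically thin dust emission: core mass from the integrated flux density
% S_nu at distance d, dust opacity kappa_nu and dust temperature T_dust.
M_\mathrm{core} = \frac{S_\nu \, d^{2}}{\kappa_\nu \, B_\nu(T_\mathrm{dust})}
```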
Abstract:
Content Distribution Networks are mandatory components of modern web architectures, with plenty of vendors offering their services. Despite the maturity of this area, new paradigms and architecture models are still being developed. Cloud Computing, on the other hand, is a more recent concept which has expanded extremely quickly, with new services being regularly added to cloud management software suites such as OpenStack. The main contribution of this paper is the architecture and development of an open source CDN that can be provisioned in an on-demand, pay-as-you-go model, thereby enabling the CDN as a Service paradigm. We describe our experience with the integration of the CDNaaS framework in a cloud environment, as a service for enterprise users. We emphasize the flexibility and elasticity of such a model, with each CDN instance being delivered on demand and associated with personalized caching policies as well as an optimized choice of Points of Presence based on the exact requirements of an enterprise customer. Our development is based on the framework developed in the Mobile Cloud Networking EU FP7 project, which offers its enterprise users a common framework to instantiate and control services. CDNaaS is one of the core support components in this project, as it is tasked with delivering different types of multimedia content to several thousand geographically distributed users. It integrates seamlessly into the MCN service life-cycle and as such enjoys all the benefits of a common design environment, allowing for improved interoperability with the rest of the services within the MCN ecosystem.
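As a rough illustration of the on-demand, per-tenant provisioning model described above, the sketch below builds a CDN instance descriptor (caching policy, Points of Presence, scaling limits) and posts it to a placeholder endpoint. The endpoint, field names and policy values are assumptions for illustration only, not the MCN or CDNaaS API.

```python
# Hypothetical sketch of on-demand CDN-instance provisioning in the
# "CDN as a Service" spirit. Endpoint and fields are illustrative only.
import json
import urllib.request

cdn_instance = {
    "tenant": "enterprise-42",
    "caching_policy": {"ttl_seconds": 3600, "max_object_mb": 256},
    "points_of_presence": ["eu-west", "eu-central"],   # chosen from customer requirements
    "autoscale": {"min_caches": 2, "max_caches": 20},
}

req = urllib.request.Request(
    "https://cdnaas.example.org/v1/instances",          # placeholder endpoint
    data=json.dumps(cdn_instance).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would submit the request; left commented out
# because the endpoint above is a placeholder.
```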
Abstract:
We analyzed the pollen content of a marine core located near the bay of Guayaquil in Ecuador to document the link between sea surface temperatures (SST) and changes in rainfall regimes on the adjacent continent during the Holocene. Based on the expansion/regression of five vegetation types, we observe three successive climatic patterns. In the first phase, between 11,700 and 7700 cal yr BP, the presence of a cloud (Andean) forest at mid altitudes and of mangroves in the estuary of the Guayas Basin was associated with a maximum in boreal summer insolation, a northernmost position of the Intertropical Convergence Zone (ITCZ), a land-sea thermal contrast, and dryness. Between 7700 and 2850 cal yr BP, the expansion of the coastal herbs and the regression of the mangrove indicate a drier climate with a weak ITCZ and low ENSO variability while austral winter insolation gradually increased. The interval between 4200 and 2850 cal yr BP was marked by the coolest and driest climatic conditions of the Holocene due to the weak influence of the ITCZ and a strengthening of the Humboldt Current. After 2850 cal yr BP, high variability and amplitude of the Andean forest changes occurred when ENSO frequency and amplitude increased, indicating high variability in land-sea connections. The ITCZ reached the latitude of Guayaquil only after 2500 cal yr BP, inducing the bimodal precipitation regime we observe today. Our study shows that besides insolation, the ITCZ position and ENSO frequency, changes in eastern equatorial Pacific SSTs play a major role in determining the composition of the ecosystems and the hydrological cycle of the Ecuadorian Pacific coast and the Western Cordillera in Ecuador.
Abstract:
The goal of the W3C's Media Annotation Working Group (MAWG) is to promote interoperability between multimedia metadata formats on the Web. As experienced by everybody, audiovisual data is omnipresent on today's Web. However, different interaction interfaces and especially diverse metadata formats prevent unified search, access, and navigation. MAWG has addressed this issue by developing an interlingua ontology and an associated API. This article discusses the rationale and core concepts of the ontology and API for media resources. The specifications developed by MAWG enable interoperable contextualized and semantic annotation and search, independent of the source metadata format, and connecting multimedia data to the Linked Data cloud. Some demonstrators of such applications are also presented in this article.
Abstract:
The enormous potential of cloud computing for improved and cost-effective service has generated unprecedented interest in its adoption. However, a potential cloud user faces numerous risks regarding service requirements, the cost implications of failure, and uncertainty about cloud providers' ability to meet service level agreements. These risks hinder cloud adoption. We extend the work on goal-oriented requirements engineering (GORE) and obstacles for informing the adoption process. We argue that prioritising obstacles and resolving them is core to mitigating risks in the adoption process. We propose a novel systematic method for prioritising obstacles and their resolution tactics using the Analytical Hierarchy Process (AHP). We provide an example to demonstrate the applicability and effectiveness of the approach. To assess the AHP-based choice of resolution tactics, we support the method with stability and sensitivity analysis.
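As an illustration of AHP-based prioritisation, the sketch below computes a priority vector and consistency ratio from a pairwise comparison matrix. The three obstacles and the judgement values are hypothetical, not taken from the paper.

```python
# Minimal sketch of AHP prioritisation for three hypothetical adoption obstacles
# ("vendor lock-in", "SLA violation", "data breach"); the pairwise judgements
# below are illustrative, not values from the paper.
import numpy as np

A = np.array([          # Saaty 1-9 scale, A[i, j] = importance of obstacle i over j
    [1.0, 3.0, 1/5],
    [1/3, 1.0, 1/7],
    [5.0, 7.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                         # normalised priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)             # consistency index
cr = ci / 0.58                                   # random index RI = 0.58 for n = 3
print("priorities:", weights.round(3), "consistency ratio:", round(cr, 3))
```

A consistency ratio below about 0.1 is conventionally taken to mean the pairwise judgements are acceptably consistent; otherwise the comparisons would be revisited before ranking resolution tactics.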
Abstract:
Individuals and corporate users are persistently considering cloud adoption due to its significant benefits compared to traditional computing environments. The data and applications in the cloud are stored in an environment that is separated, managed and maintained externally to the organisation. Therefore, it is essential for cloud providers to demonstrate and implement adequate security practices to protect the data and processes put under their stewardship. Security transparency in the cloud is likely to become the core theme that underpins the systematic disclosure of security designs and practices that enhance customer confidence in using cloud service and deployment models. In this paper, we present a framework that enables a detailed analysis of security transparency for cloud-based systems. In particular, we consider security transparency at three different levels of abstraction, i.e., the conceptual, organisational and technical levels, and identify the relevant concepts within these levels. This allows us to elaborate the essential concepts at the core of transparency and to analyse the means for implementing them from a technical perspective. Finally, an example from a real-world migration context is given to provide a solid discussion of the applicability of the proposed framework.
Abstract:
Many-core systems are emerging from the need for more computational power and power efficiency. However, many issues still revolve around many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computing systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip-based processors the network might become congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to a 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor values drift from the nominal values, which necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose, software-based auto-calibration approach is also proposed to calibrate thermal sensors over a range of voltages.
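The sketch below shows the pull-based dynamic load balancing idea in generic form: idle workers fetch the next fault as soon as they finish, so faster cores automatically take on more work and no static partition can become a straggler. It is not the Single-Chip Cloud Computer implementation; simulate() and the fault list are hypothetical placeholders.

```python
# Generic sketch of dynamic load balancing for fault simulation: workers pull
# faults one at a time from a shared pool rather than receiving a fixed static
# partition. Not the SCC code; simulate() and FAULTS are placeholders.
import time
import random
from multiprocessing import Pool

FAULTS = [f"fault_{i}" for i in range(1000)]     # hypothetical fault list

def simulate(fault):
    """Pretend to run one fault through the circuit; runtime varies per fault."""
    time.sleep(random.uniform(0.001, 0.01))
    return fault, random.random() < 0.9          # (fault, detected?)

if __name__ == "__main__":
    detected = 0
    with Pool(processes=8) as pool:
        # chunksize=1 gives pull-based scheduling: each worker requests a new
        # fault as soon as it finishes the previous one.
        for fault, hit in pool.imap_unordered(simulate, FAULTS, chunksize=1):
            detected += hit
    print(f"fault coverage: {detected / len(FAULTS):.1%}")
```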
Abstract:
Part 5: Service Orientation in Collaborative Networks