981 results for cloud droplet number concentration


Relevance:

30.00%

Publisher:

Abstract:

The evolution of Next Generation Networks, especially wireless broadband access technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX), has increased the number of "all-IP" networks across the world. The enhanced capabilities of these access networks have spearheaded the cloud computing paradigm, in which end-users expect services to be accessible anytime and anywhere. Service availability also depends on the end-user device, where one of the major constraints is battery lifetime. It is therefore necessary to assess and minimize the energy consumed by end-user devices, given its significance for the user-perceived quality of cloud computing services. In this paper, an empirical methodology to measure the energy consumption of network interfaces is proposed. Employing this methodology, an experimental evaluation of energy consumption in three different cloud computing access scenarios (including WiMAX) was performed. The empirical results show the impact of accurate network interface state management and application-level network design on energy consumption. Additionally, the outcomes can be used in software-based models to optimize energy consumption and increase the Quality of Experience (QoE) perceived by end-users.
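
The abstract does not detail the measurement model, but the usual empirical approach it implies is to integrate per-state power draw over the time the interface spends in each state. A minimal Python sketch under that assumption follows; the state names and power values are hypothetical placeholders, not figures from the paper.

```python
# Sketch: interface energy as the sum of per-state power x residency time.
# Power values (watts) are hypothetical placeholders, not paper measurements.
STATE_POWER_W = {"tx": 1.9, "rx": 1.4, "idle": 0.8, "sleep": 0.05}

def interface_energy_joules(state_log):
    """state_log: list of (state, duration_seconds) tuples covering the trace."""
    return sum(STATE_POWER_W[state] * dt for state, dt in state_log)

# A transfer that keeps the radio in the idle ("tail") state after the last
# packet wastes energy -- the effect that accurate state management mitigates.
trace = [("tx", 2.0), ("rx", 1.5), ("idle", 6.0), ("sleep", 50.0)]
print(f"Energy consumed: {interface_energy_joules(trace):.2f} J")
```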

Relevance:

30.00%

Publisher:

Abstract:

Indoor radon is regularly measured in Switzerland; however, a nationwide model to predict residential radon levels had not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the nationwide Swiss radon database collected between 1994 and 2004. A randomly selected 80% of the measurements were used for model development and the remaining 20% for independent model validation. A multivariable log-linear regression model was fitted, with relevant predictors selected according to evidence from the literature, the adjusted R², the Akaike information criterion (AIC), and the Bayesian information criterion (BIC). The prediction model was evaluated by calculating the Spearman rank correlation between measured and predicted values. Additionally, the predicted values were grouped into three categories (below the 50th, 50th to 90th, and above the 90th percentile) and compared with the measured categories using a weighted Kappa statistic. The most relevant predictors of indoor radon levels were tectonic units and year of construction of the building, followed by soil texture, degree of urbanisation, floor of the building where the measurement was taken, and housing type (P-values <0.001 for all). Mean predicted radon values (geometric mean) were 66 Bq/m³ (interquartile range 40-111 Bq/m³) in the lowest exposure category, 126 Bq/m³ (69-215 Bq/m³) in the medium category, and 219 Bq/m³ (108-427 Bq/m³) in the highest category. Spearman correlation between predictions and measurements was 0.45 (95%-CI: 0.44; 0.46) for the development dataset and 0.44 (95%-CI: 0.42; 0.46) for the validation dataset. Kappa coefficients were 0.31 for the development dataset and 0.30 for the validation dataset. The model explained 20% of the overall variability (adjusted R²). In conclusion, this residential radon prediction model, based on a large number of measurements, was shown to be robust through validation with an independent dataset. The model is appropriate for predicting radon exposure of the Swiss population in epidemiological research. Nevertheless, some exposure misclassification and regression to the mean is unavoidable and should be taken into account in future applications of the model.
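
As a minimal sketch of the modelling approach described (a multivariable log-linear regression on largely categorical predictors), the following fits log-transformed radon concentrations with statsmodels. The data are synthetic and the predictor levels illustrative; this is not the Swiss database or the published model.

```python
# Sketch: multivariable log-linear regression for indoor radon, mirroring
# the approach in the abstract. Synthetic data; predictor names follow the
# ones listed (tectonic unit, construction year, floor, ...).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "radon": rng.lognormal(mean=4.5, sigma=0.8, size=n),        # Bq/m3
    "tectonic_unit": rng.choice(["unit_A", "unit_B", "unit_C"], size=n),
    "construction_year": rng.choice(["<1960", "1960-1990", ">1990"], size=n),
    "floor": rng.integers(0, 5, size=n),
})

# Log-linear model: each predictor acts multiplicatively on the geometric mean.
model = smf.ols(
    "np.log(radon) ~ C(tectonic_unit) + C(construction_year) + floor", data=df
).fit()
predicted_gm = np.exp(model.predict(df))   # back-transform to Bq/m3
print(f"Adjusted R2: {model.rsquared_adj:.3f}")
```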

Relevance:

30.00%

Publisher:

Abstract:

The Jing Ltd. miniature combustion aerosol standard (Mini-CAST) soot generator is a portable, commercially available burner that is widely used for laboratory measurements of soot processes. While many studies have used the Mini-CAST to generate soot with known size, concentration, and organic carbon fraction under a single or few conditions, there has been no systematic study of the burner operation over a wide range of operating conditions. Here, we present a comprehensive characterization of the microphysical, chemical, morphological, and hygroscopic properties of Mini-CAST soot over the full range of oxidation air and mixing N₂ flow rates. Very fuel-rich and fuel-lean flame conditions are found to produce organic-dominated soot with mode diameters of 10-60 nm, and the highest particle number concentrations are produced under fuel-rich conditions. The lowest organic fraction and largest diameter soot (70-130 nm) occur under slightly fuel-lean conditions. Moving from fuel-rich to fuel-lean conditions also increases the O:C ratio of the soot coatings from ≈0.05 to ≈0.25, which causes a small fraction of the particles to act as cloud condensation nuclei near the Kelvin limit (κ ≈ 0-10⁻³). Comparison of these property ranges to those reported in the literature for aircraft and diesel engine soots indicates that the Mini-CAST soot is similar to real-world primary soot particles, which lends itself to a variety of process-based soot studies. The trends in soot properties uncovered here will guide selection of burner operating conditions to achieve optimum soot properties that are most relevant to such studies.
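
For context on the κ values quoted above: in single-parameter κ-Köhler theory (a standard framework for CCN activity, not restated in the abstract), the saturation ratio over a solution droplet of diameter D grown from a dry particle of diameter D_d is

\[
S(D) = \frac{D^{3} - D_d^{3}}{D^{3} - D_d^{3}\,(1-\kappa)}
\exp\!\left(\frac{4\,\sigma_{s/a}\,M_w}{R\,T\,\rho_w\,D}\right),
\]

so as κ → 0 the solute term vanishes and activation is governed almost entirely by the curvature (Kelvin) exponential, which is why κ ≈ 0-10⁻³ places these particles near the Kelvin limit.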

Relevance:

30.00%

Publisher:

Abstract:

Satellite measurement validations, climate models, atmospheric radiative transfer models and cloud models all depend on accurate measurements of cloud particle size distributions, number densities, spatial distributions, and other parameters relevant to cloud microphysical processes. However, many airborne instruments designed to measure size distributions and concentrations of cloud particles have large uncertainties in measuring number densities and size distributions of small ice crystals. HOLODEC (Holographic Detector for Clouds) is a new instrument that avoids many of these uncertainties and makes possible measurements that other probes have never made. The advantages of HOLODEC are inherent to the holographic method. In this dissertation, I describe HOLODEC, its in-situ measurements of cloud particles, and the results of its test flights. I present a hologram reconstruction algorithm whose sample spacing does not vary with reconstruction distance. This algorithm accurately reconstructs the field at all distances inside a typical holographic measurement volume, as proven by comparison with analytical solutions to the Huygens-Fresnel diffraction integral. It is fast to compute and has diffraction-limited resolution. Also described is an algorithm that can find the position along the optical axis of small particles as well as large, complex-shaped particles. I explain an implementation of these algorithms as an efficient, robust, automated program that allows holograms to be processed on a computer cluster in a reasonable time. I show size distributions and number densities of cloud particles, and show that they are within the uncertainty of independent measurements made with another measurement method. These results establish the feasibility of a cloud particle instrument with advantages over current standard instruments: a unique ability to detect shattered particles using three-dimensional positions, a sample volume that does not vary with particle size or airspeed, and the ability to yield two-dimensional particle profiles from the same measurements.
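
The abstract does not name the reconstruction algorithm, but the angular spectrum method is the standard approach whose output sample spacing is independent of reconstruction distance, so a minimal NumPy sketch of that method is given here for illustration.

```python
# Sketch: angular spectrum propagation of a recorded hologram. The output
# grid spacing equals the input spacing dx for every distance z, unlike
# Fresnel-transform reconstruction. Illustrative, not the HOLODEC code.
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z (all lengths in meters)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)              # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.zeros_like(arg, dtype=complex)      # transfer function
    prop = arg > 0                             # drop evanescent components
    H[prop] = np.exp(2j * np.pi / wavelength * z * np.sqrt(arg[prop]))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Reconstruct a stack of planes through the sample volume:
# volume = [angular_spectrum_propagate(hologram, 532e-9, 3e-6, z)
#           for z in np.arange(0.01, 0.15, 1e-3)]
```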

Relevance:

30.00%

Publisher:

Abstract:

Understanding clouds and their role in climate depends in part on our ability to understand how individual cloud particles respond to environmental conditions. With this objective in mind, a quadrupole trap with thermodynamic control has been designed and constructed to create an environment conducive to studying clouds in the laboratory. The quadrupole trap allows a single cloud particle to be suspended for long times. The temperature and water vapor saturation ratio near the trapped particle are controlled by the flow of saturated air through a tube with a discontinuous wall temperature. The design has the unique aspect that the quadrupole electrodes are submerged in heat transfer fluid, completely isolated from the cylindrical levitation volume. This fluid is used in the thermodynamic system to cool the chamber to realistic cloud temperatures, and a heated section of the tube provides the temperature discontinuity. Thus far, charged water droplets ranging from about 30 to 70 microns in diameter have been levitated. In addition, the thermodynamic system has been shown to establish the thermal conditions needed to produce supersaturation in subsequent experiments. These advances will help lead to the next generation of ice nucleation experiments, moving from hemispherical droplets on a substrate to a spherical droplet that is not in contact with any surface.
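
A reasoning step the abstract leaves implicit, stated here for clarity: supersaturation arises in such a tube because the saturation vapour pressure e_s(T) is convex in temperature, so approximately linear blending of heat and vapour between the two saturated wall sections yields vapour in excess of saturation at intermediate temperatures:

\[
e \approx \tfrac{1}{2}\left[e_s(T_1) + e_s(T_2)\right]
> e_s\!\left(\tfrac{T_1 + T_2}{2}\right)
\quad\Longrightarrow\quad
S = \frac{e}{e_s(T)} > 1 .
\]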

Relevance:

30.00%

Publisher:

Abstract:

Almost all regions of the brain receive one or more neuromodulatory inputs, and disrupting these inputs produces deficits in neuronal function. Neuromodulators act through intracellular second messenger pathways to influence the electrical properties of neurons, the integration of synaptic inputs, the spatio-temporal firing dynamics of neuronal networks, and, ultimately, systems behavior. Second messenger pathways consist of a series of bimolecular reactions, enzymatic reactions, and diffusion. Calcium is the second messenger molecule with the most effectors, and thus is highly regulated by buffers, pumps and intracellular stores. Computational modeling provides an innovative yet practical method to evaluate the spatial extent, time course and interaction among second messenger pathways, and the interaction of second messengers with neuron electrical properties. These processes occur both in compartments where the number of molecules is large enough to describe reactions deterministically (e.g. the cell body), and in compartments where the number of molecules is small enough that reactions occur stochastically (e.g. spines). In this tutorial, I explain how to develop models of second messenger pathways and calcium dynamics. The first part of the tutorial explains the equations used to model bimolecular reactions, enzyme reactions, calcium release channels, calcium pumps and diffusion. The second part explains some of the GENESIS, Kinetikit and Chemesis objects that implement the appropriate equations. An in-depth explanation of calcium and second messenger models is provided by reviewing code, in XPP, Chemesis and Kinetikit, that implements simple models of calcium dynamics and second messenger cascades.
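
As an illustration of the first class of equations the tutorial covers, a reversible bimolecular reaction A + B ⇌ C obeys d[C]/dt = k_f[A][B] − k_b[C]. The tutorial itself implements such reactions in GENESIS/Kinetikit, Chemesis and XPP; the Python sketch below is a stand-in for readers without those simulators, with illustrative rate constants.

```python
# Sketch: reversible bimolecular reaction A + B <-> C, the basic building
# block of second messenger pathway models. Rate constants are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

kf, kb = 1.0, 0.1        # forward (1/(uM*s)) and backward (1/s) rates

def rhs(t, y):
    a, b, c = y
    flux = kf * a * b - kb * c       # net forward flux (uM/s)
    return [-flux, -flux, flux]

sol = solve_ivp(rhs, t_span=(0.0, 20.0), y0=[1.0, 0.8, 0.0], max_step=0.05)
print(f"[C] approaches {sol.y[2, -1]:.3f} uM at t = 20 s")
```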

Relevance:

30.00%

Publisher:

Abstract:

The development of the Internet has made it possible to transfer data ‘around the globe at the click of a mouse’. In particular, fresh business models such as cloud computing, the newest driver illustrating the speed and breadth of the online environment, allow this data to be processed across national borders on a routine basis. A number of factors cause the Internet to blur the lines between public and private space. Firstly, globalization and the outsourcing of economic actors entail an ever-growing exchange of personal data. Secondly, security pressure in the name of the legitimate fight against terrorism opens access to a significant amount of data to an increasing number of public authorities. And finally, the tools of the digital society accompany everyone at each stage of life, leaving permanent individual and borderless traces in both space and time. Therefore, calls from both the public and private sectors for an international legal framework for privacy and data protection have become louder. Companies such as Google and Facebook have also come under continuous pressure from governments and citizens to reform their use of data. Thus, Google was not alone in calling for the creation of ‘global privacy standards’. Efforts are underway to review established privacy foundation documents, and there are similar efforts to look at standards in global approaches to privacy and data protection. Among the most remarkable recent steps was the Montreux Declaration, in which the privacy commissioners appealed to the United Nations ‘to prepare a binding legal instrument which clearly sets out in detail the rights to data protection and privacy as enforceable human rights’. This appeal was repeated in 2008 at the 30th international conference held in Strasbourg, at the 31st conference in 2009 in Madrid, and in 2010 at the 32nd conference in Jerusalem. In a globalized world, free data flow has become an everyday need. The aim of global harmonization should therefore be that it makes no difference for data users or data subjects whether data processing takes place in one or in several countries. Concern has been expressed that data users might seek to avoid privacy controls by moving their operations to countries which have lower standards in their privacy laws, or no such laws at all. To control that risk, some countries have implemented special controls in their domestic law. Again, such controls may interfere with the need for free international data flow. A formula has to be found to ensure that privacy protection at the international level does not prejudice this principle.

Relevance:

30.00%

Publisher:

Abstract:

Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing environmental conditions and numbers of users, application performance may suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for controlling the complexity of scaling applications composed of multiple services, using mechanisms based on fulfillment of SLAs. We present how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators to optimize the allocation of services belonging to distributed applications. We validate our models using experiments and simulations involving a distributed enterprise information system. We show how discovered correlations between application performance indicators can serve as a basis for creating refined service level objectives, which can then be used for scaling the application and improving its overall performance under similar conditions.
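
A minimal sketch of the core idea of deriving a refined service level objective from an indicator correlation (the abstract does not give the actual mechanism; the metric names, linear model, and thresholds below are hypothetical): regress response time on CPU utilization, then invert the fit to get the utilization bound implied by the SLA response-time limit.

```python
# Sketch: turn an observed correlation between CPU utilization and response
# time into a refined SLO on CPU, then use it as a scaling trigger.
# Synthetic data; SLA bound and margin are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
cpu = rng.uniform(10, 95, size=500)                      # % utilization
resp_ms = 40 + 3.2 * cpu + rng.normal(0, 15, size=500)   # measured latency

slope, intercept = np.polyfit(cpu, resp_ms, 1)   # linear fit: resp ~ cpu
SLA_RESP_MS = 250.0                              # SLA response-time bound
cpu_slo = (SLA_RESP_MS - intercept) / slope      # refined SLO on CPU

def should_scale_out(current_cpu, margin=0.9):
    """Trigger scale-out before the derived CPU bound is reached."""
    return current_cpu > margin * cpu_slo

print(f"Derived CPU SLO: {cpu_slo:.1f}%; scaling at {0.9 * cpu_slo:.1f}%")
```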

Relevance:

30.00%

Publisher:

Abstract:

We demonstrate the use of Fourier transform infrared spectroscopy (FTIRS) to make quantitative measures of total organic carbon (TOC), total inorganic carbon (TIC) and biogenic silica (BSi) concentrations in sediment. FTIRS is a fast and cost-effective technique, and only small sediment samples are needed (0.01 g). Statistically significant models were developed using sediment samples from northern Sweden and were applied to sediment records from Sweden, northeast Siberia and Macedonia. The correlation between FTIRS-inferred values and amounts of biogeochemical constituents assessed conventionally varied between r = 0.84-0.99 for TOC, r = 0.85-0.99 for TIC, and r = 0.68-0.94 for BSi. Because FTIR spectra contain information on a large number of both inorganic and organic components, there is great potential for FTIRS to become an important tool in paleolimnology.
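
The abstract does not name the calibration technique; partial least squares (PLS) regression is the method most commonly used to relate FTIR spectra to constituent concentrations, so the sketch below assumes it, with synthetic data.

```python
# Sketch: FTIRS-style calibration of spectra -> concentration with PLS
# regression (an assumption; the abstract does not state the method).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_samples, n_wavenumbers = 120, 400
spectra = rng.normal(size=(n_samples, n_wavenumbers))   # absorbance matrix
# Synthetic "TOC" signal carried by one spectral band plus noise:
toc = spectra[:, 50:60].sum(axis=1) + rng.normal(0, 0.5, n_samples)

pls = PLSRegression(n_components=8)
r2_cv = cross_val_score(pls, spectra, toc, cv=5, scoring="r2")
print(f"Cross-validated R^2: {r2_cv.mean():.2f}")
pls.fit(spectra, toc)                 # final calibration model
toc_predicted = pls.predict(spectra)  # apply to new spectra in practice
```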

Relevance:

30.00%

Publisher:

Abstract:

Previously reported androgen receptor concentrations in rat testis and testicular cell types have varied widely. In the studies reported here, a nuclear exchange assay was established in rat testis in which exchange after 86 hours at 4 °C was greater than 85% complete and the receptor was stable. Receptor concentration per DNA measured by exchange declined between 15 and 25 days of age in the rat testis, then increased 4-fold during sexual maturation. Proliferation of germ cells, which had low receptor concentration, appeared to account for the early decline in testicular receptor concentration, whereas an increase in receptor number per Sertoli cell between 25 and 35 days of age contributed to the later increase. An increase in Leydig cell number during maturation appeared to account for the remainder of the increase, owing to the high receptor concentration in these cells. Detailed studies showed that other possible explanations for changes in receptor number (e.g. shifts in receptor concentration between the cytosol and nuclear subcellular compartments, or changes in the affinity of the receptor for its ligands) were not likely. Androgen receptor dynamics in testicular cells showed rapid, specific uptake of (³H)-testosterone that was easily blocked by unlabeled testosterone (RA of 7 nM in both cell types) and medroxyprogesterone acetate (RA of 28 and 16 nM in Sertoli and peritubular cells, respectively), but not as well by the anti-androgens cyproterone acetate (RA of 116 and 68 nM) and hydroxyflutamide (RA of 300 and 180 nM). The affinity of the receptor for the ligand dimethylnortestosterone was similar in the two cell types (Kd values of 0.78 and 0.71 nM for Sertoli and peritubular cells) and was virtually identical to the affinity of the whole-testis receptor (0.89 nM). Medroxyprogesterone acetate and testosterone significantly increased nuclear androgen receptor concentration relative to untreated controls in Sertoli and peritubular cells, whereas hydroxyflutamide and cyproterone acetate did not. Despite the different embryological origins of peritubular and Sertoli cells, their responses to both androgens and anti-androgens were similar. In addition, these studies suggest that peritubular cells are as likely as Sertoli cells to be primary androgen targets.
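
For readers interpreting the Kd values above, the single-site equilibrium binding relation (standard receptor pharmacology, implicit in how Kd is defined rather than stated in the abstract) gives the fraction of receptors occupied at free ligand concentration [L]:

\[
\theta = \frac{[\mathrm{L}]}{K_d + [\mathrm{L}]},
\]

so at [L] = Kd ≈ 0.8 nM half of the receptors are occupied, and ligands with larger RA values require proportionally higher concentrations to achieve comparable occupancy.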

Relevance:

30.00%

Publisher:

Abstract:

Environmental data sets of pollutant concentrations in air, water, and soil frequently include unquantified sample values reported only as being below the analytical method detection limit. These values, referred to as censored values, should be considered in the estimation of distribution parameters, as each represents some value of pollutant concentration between zero and the detection limit. Most of the currently accepted methods for estimating the population parameters of environmental data sets containing censored values rely upon the assumption of an underlying normal (or transformed normal) distribution. This assumption can result in unacceptable levels of error in parameter estimation due to the unbounded left tail of the normal distribution. With the beta distribution, which is bounded on the same range as a distribution of concentrations, [0 ≤ x ≤ 1], parameter estimation errors resulting from improper distribution bounds are avoided. This work developed a method that uses the beta distribution to estimate population parameters from censored environmental data sets and evaluated its performance in comparison to currently accepted methods that rely upon an underlying normal (or transformed normal) distribution. Data sets were generated assuming typical values encountered in environmental pollutant evaluation for the mean, standard deviation, and number of variates. For each set of model values, data sets were generated assuming that the data were distributed normally, lognormally, or according to a beta distribution. For varying levels of censoring, two established methods of parameter estimation, regression on normal order statistics and regression on lognormal order statistics, were used to estimate the known mean and standard deviation of each data set. The method developed for this study, employing a beta distribution assumption, was also used to estimate parameters, and the relative accuracy of all three methods was compared. For data sets of all three distribution types, and for censoring levels up to 50%, the performance of the new method equaled, if not exceeded, that of the two established methods. Because of its robustness in parameter estimation regardless of distribution type or censoring level, the method employing the beta distribution should be considered for full development in estimating parameters for censored environmental data sets.
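
A minimal sketch of the central idea, under the assumption of straightforward maximum likelihood estimation (the abstract does not spell out the fitting procedure): quantified values contribute density terms to the likelihood, while each censored value contributes the beta CDF evaluated at the detection limit.

```python
# Sketch: fit beta parameters to a left-censored data set by maximum
# likelihood. Observed values contribute log-pdf terms; values below the
# detection limit contribute log-cdf(DL) terms. Synthetic data on [0, 1].
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
true_a, true_b, dl = 2.0, 8.0, 0.08          # shape params, detection limit
x = rng.beta(true_a, true_b, size=200)       # scaled concentrations
observed = x[x >= dl]
n_censored = int((x < dl).sum())

def neg_log_lik(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    ll = stats.beta.logpdf(observed, a, b).sum()
    ll += n_censored * stats.beta.logcdf(dl, a, b)
    return -ll

fit = optimize.minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
a_hat, b_hat = fit.x
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, "
      f"mean = {a_hat / (a_hat + b_hat):.3f}")
```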

Relevance:

30.00%

Publisher:

Abstract:

Autophagy is a lysosomal bulk degradation pathway for cytoplasmic cargo, such as long-lived proteins, lipids, and organelles. Induced upon nutrient starvation, autophagic degradation is accomplished by the concerted actions of autophagy-related (ATG) proteins. Here we demonstrate that two ATGs, human Atg2A and Atg14L, colocalize at cytoplasmic lipid droplets (LDs) and are functionally involved in controlling the number and size of LDs in human tumor cell lines. We show that Atg2A is targeted to cytoplasmic ADRP-positive LDs that migrate bidirectionally along microtubules. The LD localization of Atg2A was found to be independent of the autophagic status. Further, Atg2A colocalized with Atg14L under nutrient-rich conditions when autophagy was not induced. Upon nutrient starvation and dependent on phosphatidylinositol 3-phosphate [PtdIns(3)P] generation, both Atg2A and Atg14L were also specifically targeted to endoplasmic reticulum-associated early autophagosomal membranes, marked by the PtdIns(3)P effectors double-FYVE containing protein 1 (DFCP1) and WD-repeat protein interacting with phosphoinositides 1 (WIPI-1), both of which function at the onset of autophagy. These data provide evidence for additional roles of Atg2A and Atg14L in the formation of early autophagosomal membranes and also in lipid metabolism.

Relevance:

30.00%

Publisher:

Abstract:

Cloud Computing has evolved to become an enabler for delivering access to large scale distributed applications running on managed network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality of service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) dynamically allocating the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive predictive SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
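
A minimal sketch of the predictive flavour of such an algorithm (the paper's actual models are not given in the abstract; the AR fit, capacity figure, and thresholds below are illustrative): fit autoregressive coefficients to the recent workload series, forecast one step ahead, and scale out if the forecast would breach the SLA-derived capacity of the current VM pool.

```python
# Sketch: autoregressive predictive VM-scaling. Forecast the next-interval
# request rate from recent history and scale before capacity is breached.
# Capacity per VM and thresholds are hypothetical.
import numpy as np

REQS_PER_VM = 500.0          # SLA-derived capacity of one VM (hypothetical)

def ar_forecast(history, order=3):
    """One-step-ahead forecast from a least-squares AR(order) fit."""
    h = np.asarray(history, dtype=float)
    X = np.array([h[t - order:t] for t in range(order, len(h))])
    y = h[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(h[-order:] @ coeffs)

def scaling_decision(history, n_vms):
    forecast = ar_forecast(history)
    if forecast > n_vms * REQS_PER_VM:               # predicted SLA breach
        return n_vms + 1                             # proactive scale-out
    if n_vms > 1 and forecast < 0.5 * (n_vms - 1) * REQS_PER_VM:
        return n_vms - 1                             # ample headroom: scale in
    return n_vms

load = [400, 430, 470, 520, 560, 610, 660, 720]      # requests/s, rising
print(scaling_decision(load, n_vms=1))               # forecast > 500 -> 2
```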

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE This study assessed whether a cycle of "routine" therapeutic drug monitoring (TDM) for imatinib dosage individualization, targeting an imatinib trough plasma concentration (Cmin) of 1,000 ng/ml (tolerance: 750-1,500 ng/ml), could improve clinical outcomes in chronic myelogenous leukemia (CML) patients, compared with TDM use only in case of problems ("rescue" TDM). METHODS The imatinib concentration monitoring evaluation was a multicenter randomized controlled trial including adult patients in chronic or accelerated phase CML who had been receiving imatinib for less than 5 years. Patients were allocated 1:1 to "routine TDM" or "rescue TDM." The primary endpoint was a combined outcome (failure- and toxicity-free survival with continuation on imatinib) over 1-year follow-up, analyzed by intention-to-treat (ISRCTN31181395). RESULTS Among 56 patients (55 evaluable), 14/27 (52%) receiving "routine TDM" remained event-free versus 16/28 (57%) "rescue TDM" controls (P = 0.69). In the "routine TDM" arm, dosage recommendations were correctly adopted in 14 patients (median Cmin: 895 ng/ml), who had fewer unfavorable events (28%) than the 13 patients not receiving the advised dosage (77%; P = 0.03; median Cmin: 648 ng/ml). CONCLUSIONS This first target concentration intervention trial could not formally demonstrate a benefit of "routine TDM," owing to the small number of patients and surprisingly limited prescriber adherence to the dosage recommendations. Favorable outcomes were, however, found in the patients actually elected for target dosing. This study thus provides a first prospective indication that TDM can be a useful tool to guide drug dosage and shift decisions. The study design and analysis provide an interesting paradigm for future randomized TDM trials of targeted anticancer agents.

Relevance:

30.00%

Publisher:

Abstract:

The objective of this article is to demonstrate the feasibility of on-demand creation of cloud-based elastic mobile core networks, along with their lifecycle management. For this purpose, the article describes the key elements needed to realize the architectural vision of EPC as a Service, an implementation option for the 3GPP-specified Evolved Packet Core that can be deployed in cloud environments. To meet the several challenging requirements associated with implementing EPC over a cloud infrastructure and providing it "as a Service," the article presents a number of different options, each with its own characteristics, advantages, and disadvantages. A thorough analysis comparing the different implementation options is also presented.