15 results for cloud droplet number concentration

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

100.00%

Publisher:

Abstract:

Charcoal analysis was conducted on sediment cores from three lakes to assess the relationship between the area and number of charcoal particles. Three charcoal-size parameters (maximum breadth, maximum length and area) were measured on sediment samples representing various vegetation types, including shrub tundra, boreal forest and temperate forest. These parameters and charcoal size-class distributions do not differ statistically between two sites where the same preparation technique (glycerine pollen slides) was used, but they differ for the same core when different techniques were applied. Results suggest that differences in charcoal size and size-class distribution are mainly caused by different preparation techniques and are not related to vegetation-type variation. At all three sites, the area and number concentrations of charcoal particles are highly correlated in standard pollen slides; 82–83% of the variability of the charcoal-area concentration can be explained by the particle-number concentration. Comparisons between predicted and measured area concentrations show that regression equations linking charcoal number and area concentrations can be used across sites as long as the same pollen-preparation technique is used. Thus it is concluded that it is unnecessary to measure charcoal areas in standard pollen slides – a time-consuming and tedious process.
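
As a purely illustrative aside, the regression relationship described above (particle-number concentration explaining 82–83% of the variance in charcoal-area concentration) can be reproduced with an ordinary least-squares fit; the sample values below are hypothetical, not the study's measurements.

    # Minimal sketch of the number-vs-area regression described above (hypothetical data).
    import numpy as np
    from scipy import stats

    number_conc = np.array([120., 340., 85., 560., 230., 410., 150., 300.])  # particles per cm^3 of sediment
    area_conc = np.array([0.9, 2.6, 0.7, 4.1, 1.8, 3.0, 1.2, 2.3])           # mm^2 per cm^3 of sediment

    fit = stats.linregress(number_conc, area_conc)
    print(f"R^2 = {fit.rvalue ** 2:.2f}")  # share of area-concentration variance explained by particle number

    # Predict the area concentration of a new sample counted with the same preparation technique
    new_number = 275.0
    print(f"predicted area concentration: {fit.slope * new_number + fit.intercept:.2f} mm^2/cm^3")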

Relevance:

100.00%

Publisher:

Abstract:

The scope of this work was to examine in vitro responses of lung cells to secondary organic aerosol (SOA) particles, under realistic ambient air and physiological conditions occurring when particles are inhaled by mammals, using a novel particle deposition chamber. The cell cultures included cell types that are representative of the inner surface of airways and alveoli and are the target cells for inhaled particles. The results demonstrate that exposure to SOA at ambient-air concentrations of about 10⁴ particles/cm³ for 2 h leads to only moderate cellular responses. There is evidence for (i) cell-type-specific effects and (ii) different effects of SOA originating from anthropogenic and biogenic precursors, i.e. 1,3,5-trimethylbenzene (TMB) and alpha-pinene, respectively. There was no indication of cytotoxic effects, but there were subtle changes in cellular functions that are essential for lung homeostasis. Decreased phagocytic activity was found in human macrophages exposed to SOA from alpha-pinene. Alveolar epithelial wound repair was affected by TMB-SOA exposure, mainly because of altered cell spreading and migration at the edge of the wound. In addition, cellular responses were found to correlate with particle number concentration, as interleukin-8 production was increased in pig explants exposed to TMB-SOA with high particle numbers.

Relevance:

100.00%

Publisher:

Abstract:

The intensive use of nano-sized titanium dioxide (TiO2) particles in many different applications necessitates studies on their risk assessment as there are still open questions on their safe handling and utilization. For reliable risk assessment, the interaction of TiO2 nanoparticles (NP) with biological systems ideally needs to be investigated using physico-chemically uniform and well-characterized NP. In this article, we describe the reproducible production of TiO2 NP aerosols using spark ignition technology. Because currently no data are available on inhaled NP in the 10–50 nm diameter range, the emphasis was to generate NP as small as 20 nm for inhalation studies in rodents. For anticipated in vivo dosimetry analyses, TiO2 NP were radiolabeled with ⁴⁸V by proton irradiation of the titanium electrodes of the spark generator. The dissolution rate of the ⁴⁸V label was about 1% within the first day. The highly concentrated, polydisperse TiO2 NP aerosol (3–6 × 10⁶ cm⁻³) proved to be constant over several hours in terms of its count median mobility diameter, its geometric standard deviation, and number concentration. Extensive characterization of NP chemical composition, physical structure, morphology, and specific surface area was performed. The originally generated amorphous TiO2 NP were converted into crystalline anatase TiO2 NP by thermal annealing at 950 °C. Both crystalline and amorphous 20-nm TiO2 NP were chain agglomerated/aggregated, consisting of primary particles in the range of 5 nm. Disintegration of the deposited TiO2 NP in lung tissue was not detectable within 24 h.
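
For illustration only, the two aerosol statistics quoted above (count median mobility diameter and geometric standard deviation) can be computed from a binned size distribution as sketched below; the size bins and counts are hypothetical, not the reported measurements.

    # Count median diameter (CMD) and geometric standard deviation (GSD) from binned counts.
    import numpy as np

    diameters = np.array([10., 15., 20., 30., 40., 50.])        # nm, bin midpoints (hypothetical)
    counts = np.array([2e5, 8e5, 1.5e6, 1.2e6, 6e5, 2e5])       # particles/cm^3 per bin (hypothetical)

    log_d = np.log(diameters)
    cmd = np.exp(np.average(log_d, weights=counts))             # geometric mean; equals the count median for a lognormal
    gsd = np.exp(np.sqrt(np.average((log_d - np.log(cmd)) ** 2, weights=counts)))

    print(f"count median diameter ~ {cmd:.1f} nm, GSD ~ {gsd:.2f}")
    print(f"total number concentration = {counts.sum():.2e} cm^-3")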

Relevance:

100.00%

Publisher:

Abstract:

The intensive use of nano-sized particles in many different applications necessitates studies on their risk assessment as there are still open questions on their safe handling and utilization. For reliable risk assessment, the interaction of nanoparticles (NP) with biological systems after various routes of exposure needs to be investigated using well-characterized NP. We report here on the generation of gold-NP (Au-NP) aerosols for inhalation studies using the spark ignition technique, on their characterization in terms of chemical composition, physical structure, morphology, and specific surface area, and on their interaction with lung tissues and lung cells after 1 h of inhalation by mice. The originally generated agglomerated Au-NP were converted into compact spherical Au-NP by thermal annealing at 600 °C, providing particles of similar mass but different size and specific surface area. Since there are currently no translocation data available on inhaled Au-NP in the 10–50 nm diameter range, the emphasis was to generate NP as small as 20 nm for inhalation in rodents. For anticipated in vivo systemic translocation and dosimetry analyses, radiolabeled Au-NP were created by proton irradiation of the gold electrodes of the spark generator, forming the gamma-ray-emitting isotope ¹⁹⁵Au with a half-life of 186 days, which allows long-term biokinetic studies. The dissolution rate of ¹⁹⁵Au from the NP was below detection limits. The highly concentrated, polydisperse Au-NP aerosol (1–2 × 10⁷ NP/cm³) proved to be constant over several hours in terms of its count median mobility diameter, its geometric standard deviation, and number concentration. After collection on filters, the particles can be re-suspended and used for instillation or ingestion studies.
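
As an illustrative aside, the 186-day half-life quoted above is what makes simple decay correction possible in long-term biokinetic studies; the activity and time values in this sketch are hypothetical.

    # Decay-correcting a measured 195Au activity back to the exposure date.
    import math

    HALF_LIFE_DAYS = 186.0
    decay_const = math.log(2) / HALF_LIFE_DAYS                  # per day

    measured_activity_bq = 12.4                                 # hypothetical activity in a tissue sample
    days_since_exposure = 30.0

    fraction_remaining = math.exp(-decay_const * days_since_exposure)
    activity_at_exposure = measured_activity_bq / fraction_remaining

    print(f"fraction of 195Au remaining after {days_since_exposure:.0f} d: {fraction_remaining:.3f}")
    print(f"activity corrected to exposure date: {activity_at_exposure:.2f} Bq")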

Relevance:

100.00%

Publisher:

Abstract:

Particulate matter (PM) pollution is a leading cause of premature death, particularly in those with pre-existing lung disease. A causative link between particle properties and adverse health effects remains unestablished, mainly because of the complex and variable physico-chemical parameters of PM, so controlled laboratory experiments are required. Generating atmospherically realistic aerosols and performing cell-exposure studies at relevant particle doses are challenging. Here we examine gasoline-exhaust particle toxicity from a Euro-5 passenger car in a uniquely realistic exposure scenario, combining a smog chamber simulating atmospheric ageing, an aerosol enrichment system varying particle number concentration independently of particle chemistry, and an aerosol deposition chamber physiologically delivering particles onto air-liquid interface (ALI) cultures reproducing normal and susceptible health status. Gasoline exhaust is an important PM source with largely unknown health effects. We investigated acute responses of fully differentiated normal, distressed (antibiotics-treated) normal, and cystic fibrosis human bronchial epithelia (HBE), and a proliferating, single-cell-type bronchial epithelial cell line (BEAS-2B). We show that a single, short-term exposure to realistic doses of atmospherically aged gasoline-exhaust particles impairs key defence mechanisms of the epithelium, rendering it more vulnerable to subsequent hazards. We establish dose-response curves at realistic particle-concentration levels. Significant differences between cell models suggest that fully differentiated HBE are the most appropriate choice for future toxicity studies.
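
For illustration, dose-response curves of the kind mentioned above are often obtained by fitting a four-parameter log-logistic model to the measured endpoint; the doses, responses and starting values in this sketch are hypothetical.

    # Fitting a four-parameter log-logistic dose-response model (hypothetical data).
    import numpy as np
    from scipy.optimize import curve_fit

    def log_logistic(dose, bottom, top, ec50, hill):
        """Four-parameter dose-response model."""
        return bottom + (top - bottom) / (1.0 + (dose / ec50) ** hill)

    doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])          # hypothetical deposited dose, ug/cm^2
    response = np.array([98., 95., 86., 64., 41., 30.])         # hypothetical endpoint, % of control

    params, _ = curve_fit(log_logistic, doses, response, p0=[30., 100., 3., 1.])
    bottom, top, ec50, hill = params
    print(f"EC50 ~ {ec50:.2f} ug/cm^2, Hill slope ~ {hill:.2f}")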

Relevance:

100.00%

Publisher:

Abstract:

Serial quantification of BCR-ABL1 mRNA is an important therapeutic indicator in chronic myeloid leukaemia, but there is substantial variation in results reported by different laboratories. To improve comparability, an internationally accepted plasmid certified reference material (CRM) was developed according to ISO Guide 34:2009. Fragments of BCR-ABL1 (e14a2 mRNA fusion), BCR and GUSB transcripts were amplified and cloned into pUC18 to yield plasmid pIRMM0099. Six different linearised plasmid solutions were produced with the following copy number concentrations, assigned by digital PCR, and expanded uncertainties: 1.08±0.13 × 10⁶, 1.08±0.11 × 10⁵, 1.03±0.10 × 10⁴, 1.02±0.09 × 10³, 1.04±0.10 × 10² and 10.0±1.5 copies/μl. The certification of the material for the number of specific DNA fragments per plasmid, the copy number concentration of the plasmid solutions, and the assessment of inter-unit heterogeneity and stability were performed according to ISO Guide 35:2006. Two suitability studies performed by 63 BCR-ABL1 testing laboratories demonstrated that this set of 6 plasmid CRMs can help to standardise the measurement of e14a2 BCR-ABL1 transcript numbers and of three control genes (ABL1, BCR and GUSB). The set of six plasmid CRMs is distributed worldwide by the Institute for Reference Materials and Measurements (Belgium) and its authorised distributors (https://ec.europa.eu/jrc/en/reference-materials/catalogue/; CRM code ERM-AD623a-f).
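
As a hypothetical illustration of how such a certified dilution series is used in practice, the sketch below builds a qPCR standard curve from the six certified copy-number concentrations; the Cq values are invented for the example.

    # qPCR standard curve from the certified plasmid copy-number concentrations.
    import numpy as np
    from scipy import stats

    copies_per_ul = np.array([1.08e6, 1.08e5, 1.03e4, 1.02e3, 1.04e2, 10.0])   # certified values
    cq = np.array([18.1, 21.5, 24.9, 28.3, 31.7, 35.1])                        # hypothetical quantification cycles

    fit = stats.linregress(np.log10(copies_per_ul), cq)
    efficiency = 10 ** (-1.0 / fit.slope) - 1.0                 # amplification efficiency from the slope
    print(f"slope = {fit.slope:.2f} cycles per log10(copies), R^2 = {fit.rvalue ** 2:.3f}")
    print(f"estimated PCR efficiency ~ {efficiency * 100:.0f}%")

    # Interpolate an unknown sample from its Cq
    unknown_cq = 26.0
    log_copies = (unknown_cq - fit.intercept) / fit.slope
    print(f"estimated concentration ~ {10 ** log_copies:.3e} copies/ul")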

Relevance:

30.00%

Publisher:

Abstract:

The evolution of Next Generation Networks, especially wireless broadband access technologies such as Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX), has increased the number of "all-IP" networks across the world. The enhanced capabilities of these access networks have spearheaded the cloud computing paradigm, in which end-users expect services to be accessible anytime and anywhere. Service availability is also tied to the end-user device, where one of the major constraints is battery lifetime. It is therefore necessary to assess and minimize the energy consumed by end-user devices, given its significance for the user-perceived quality of cloud computing services. In this paper, an empirical methodology for measuring the energy consumption of network interfaces is proposed. Using this methodology, an experimental evaluation of energy consumption in three different cloud computing access scenarios (including WiMAX) was performed. The empirical results show the impact of accurate management of network-interface states and of application-level network design on energy consumption. The outcomes can also be used in further software-based models to optimize energy consumption and to increase the Quality of Experience (QoE) perceived by end-users.
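
As a hedged illustration of the kind of computation such an empirical methodology involves, the sketch below integrates the sampled power draw of a network interface over time to obtain energy; the supply voltage, current samples and sampling rate are assumptions.

    # Energy of a network interface from sampled voltage and current.
    import numpy as np

    SAMPLE_RATE_HZ = 1000.0                                     # assumed sampling rate of the power meter
    rng = np.random.default_rng(0)
    voltage_v = np.full(5000, 3.7)                              # hypothetical battery voltage samples (V)
    current_a = rng.uniform(0.05, 0.35, size=5000)              # hypothetical interface current draw (A)

    power_w = voltage_v * current_a
    energy_j = power_w.sum() / SAMPLE_RATE_HZ                   # rectangular integration of power over time

    print(f"mean power: {power_w.mean():.2f} W")
    print(f"energy over {len(power_w) / SAMPLE_RATE_HZ:.1f} s: {energy_j:.2f} J")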

Relevance:

30.00%

Publisher:

Abstract:

Indoor radon is regularly measured in Switzerland. However, a nationwide model to predict residential radon levels has not been developed. The aim of this study was to develop a prediction model to assess indoor radon concentrations in Switzerland. The model was based on 44,631 measurements from the nationwide Swiss radon database collected between 1994 and 2004. Of these, a randomly selected 80% of the measurements were used for model development and the remaining 20% for an independent model validation. A multivariable log-linear regression model was fitted and relevant predictors were selected according to evidence from the literature, the adjusted R², the Akaike information criterion (AIC), and the Bayesian information criterion (BIC). The prediction model was evaluated by calculating the Spearman rank correlation between measured and predicted values. Additionally, the predicted values were categorised into three categories (50th, 50th-90th and 90th percentile) and compared with the measured categories using a weighted Kappa statistic. The most relevant predictors for indoor radon levels were the tectonic unit and the year of construction of the building, followed by soil texture, degree of urbanisation, the floor of the building where the measurement was taken, and housing type (P-values <0.001 for all). Mean predicted radon values (geometric means) were 66 Bq/m³ (interquartile range 40-111 Bq/m³) in the lowest exposure category, 126 Bq/m³ (69-215 Bq/m³) in the medium category, and 219 Bq/m³ (108-427 Bq/m³) in the highest category. The Spearman correlation between predictions and measurements was 0.45 (95%-CI: 0.44; 0.46) for the development dataset and 0.44 (95%-CI: 0.42; 0.46) for the validation dataset. Kappa coefficients were 0.31 for the development dataset and 0.30 for the validation dataset. The model explained 20% of the overall variability (adjusted R²). In conclusion, this residential radon prediction model, based on a large number of measurements, was shown to be robust through validation with an independent dataset. The model is appropriate for predicting the radon exposure of the Swiss population in epidemiological research. Nevertheless, some exposure misclassification and regression to the mean are unavoidable and should be taken into account in future applications of the model.
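
As a hypothetical illustration of the modelling approach described above, the sketch below fits a log-linear regression of indoor radon on a few categorical building predictors and back-transforms a prediction to a geometric mean; the variable names, categories and data are assumptions, not the study's dataset.

    # Log-linear regression of indoor radon on building predictors (synthetic data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 500
    df = pd.DataFrame({
        "radon": rng.lognormal(mean=np.log(100), sigma=0.8, size=n),    # Bq/m^3, synthetic
        "tectonic_unit": rng.choice(["A", "B", "C"], size=n),
        "construction_period": rng.choice(["<1960", "1960-1990", ">1990"], size=n),
        "floor": rng.integers(0, 4, size=n),
    })

    model = smf.ols("np.log(radon) ~ C(tectonic_unit) + C(construction_period) + floor", data=df).fit()
    print(f"adjusted R^2: {model.rsquared_adj:.2f}")

    # Geometric-mean prediction for a new dwelling (back-transform from the log scale)
    new = pd.DataFrame({"tectonic_unit": ["B"], "construction_period": ["<1960"], "floor": [0]})
    print("predicted geometric mean:", float(np.exp(model.predict(new)).iloc[0]), "Bq/m^3")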

Relevance:

30.00%

Publisher:

Abstract:

Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing environmental conditions and numbers of users, application performance might suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for controlling the complexity of scaling applications composed of multiple services, using mechanisms based on fulfillment of SLAs. We present how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators for optimizing the allocation of services belonging to distributed applications. We validate our models using experiments and simulations involving a distributed enterprise information system. We show how discovered correlations between application performance indicators can be used as a basis for creating refined service level objectives, which can then be used for scaling the application and improving its overall performance under similar conditions.
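
As a hypothetical sketch of how a correlation between performance indicators can be turned into a refined service level objective, the example below maps an SLA bound on response time onto a CPU-utilisation threshold; the monitoring data, SLA value and linear relationship are assumptions.

    # Deriving a CPU-utilisation objective from its correlation with a response-time SLA.
    import numpy as np

    rng = np.random.default_rng(7)
    cpu_util = rng.uniform(20, 95, size=200)                            # % CPU of a service tier (hypothetical)
    response_ms = 20 + 2.5 * cpu_util + rng.normal(0, 15, size=200)     # hypothetical user-facing latency

    corr = np.corrcoef(cpu_util, response_ms)[0, 1]
    print(f"correlation(cpu, response time) = {corr:.2f}")

    # If strongly correlated, translate the SLA bound on response time into a CPU objective
    SLA_RESPONSE_MS = 150.0
    slope, intercept = np.polyfit(cpu_util, response_ms, 1)
    cpu_objective = (SLA_RESPONSE_MS - intercept) / slope
    print(f"refined objective: keep CPU below ~{cpu_objective:.0f}% to respect the {SLA_RESPONSE_MS:.0f} ms SLA")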

Relevance:

30.00%

Publisher:

Abstract:

We demonstrate the use of Fourier transform infrared spectroscopy (FTIRS) to make quantitative measurements of total organic carbon (TOC), total inorganic carbon (TIC) and biogenic silica (BSi) concentrations in sediment. FTIRS is a fast and cost-effective technique, and only small sediment samples are needed (0.01 g). Statistically significant models were developed using sediment samples from northern Sweden and were applied to sediment records from Sweden, northeast Siberia and Macedonia. The correlation between FTIRS-inferred values and amounts of biogeochemical constituents assessed conventionally varied between r = 0.84–0.99 for TOC, r = 0.85–0.99 for TIC, and r = 0.68–0.94 for BSi. Because FTIR spectra contain information on a large number of both inorganic and organic components, there is great potential for FTIRS to become an important tool in paleolimnology.
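
As an illustration of the kind of calibration such work relies on, the sketch below trains a partial least squares (PLS) regression to infer TOC from spectra and reports the correlation with reference values; the synthetic spectra and the component count are assumptions, not the study's calibration set.

    # PLS calibration of TOC against FTIR spectra (synthetic data).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_wavenumbers = 120, 400
    spectra = rng.normal(size=(n_samples, n_wavenumbers))
    toc = spectra[:, 50:60].sum(axis=1) * 0.5 + 8.0 + rng.normal(0, 0.3, n_samples)   # % TOC, synthetic

    X_train, X_test, y_train, y_test = train_test_split(spectra, toc, random_state=0)
    pls = PLSRegression(n_components=6).fit(X_train, y_train)

    r = np.corrcoef(pls.predict(X_test).ravel(), y_test)[0, 1]
    print(f"correlation between FTIRS-inferred and reference TOC: r = {r:.2f}")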

Relevance:

30.00%

Publisher:

Abstract:

Autophagy is a lysosomal bulk degradation pathway for cytoplasmic cargo, such as long-lived proteins, lipids, and organelles. Induced upon nutrient starvation, autophagic degradation is accomplished by the concerted actions of autophagy-related (ATG) proteins. Here we demonstrate that two ATGs, human Atg2A and Atg14L, colocalize at cytoplasmic lipid droplets (LDs) and are functionally involved in controlling the number and size of LDs in human tumor cell lines. We show that Atg2A is targeted to cytoplasmic ADRP-positive LDs that migrate bidirectionally along microtubules. The LD localization of Atg2A was found to be independent of the autophagic status. Further, Atg2A colocalized with Atg14L under nutrient-rich conditions when autophagy was not induced. Upon nutrient starvation and dependent on phosphatidylinositol 3-phosphate [PtdIns(3)P] generation, both Atg2A and Atg14L were also specifically targeted to endoplasmic reticulum-associated early autophagosomal membranes, marked by the PtdIns(3)P effectors double-FYVE containing protein 1 (DFCP1) and WD-repeat protein interacting with phosphoinositides 1 (WIPI-1), both of which function at the onset of autophagy. These data provide evidence for additional roles of Atg2A and Atg14L in the formation of early autophagosomal membranes and also in lipid metabolism.

Relevance:

30.00%

Publisher:

Abstract:

Cloud computing has evolved to become an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality-of-service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) that dynamically allocates the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive, predictive, SLA-driven scaling algorithms in cloud management systems for guaranteeing the performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
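
As a hedged sketch of the difference between reactive and predictive SLA-based scaling, the example below compares a rule that scales on the currently observed load with one that extrapolates the recent load trend (a simple stand-in for the autoregressive predictor mentioned above); the workload trace, per-VM capacity and forecast horizon are assumptions.

    # Reactive vs. trend-extrapolating VM-scaling rules on a synthetic workload trace.
    import numpy as np

    rng = np.random.default_rng(3)
    load = np.clip(np.cumsum(rng.normal(0.5, 5, 200)) + 100, 20, None)  # requests/s hitting the service

    SLA_LOAD_PER_VM = 50.0                                              # assumed per-VM capacity implied by the SLA

    def reactive_vms(current_load):
        """Scale only on the load observed right now."""
        return int(np.ceil(current_load / SLA_LOAD_PER_VM))

    def predictive_vms(history, window=4, horizon=5):
        """Scale on a forecast that extrapolates the recent linear trend a few steps ahead."""
        if len(history) < window:
            return reactive_vms(history[-1])
        trend = np.polyfit(np.arange(window), history[-window:], 1)[0]
        forecast = max(history[-1] + trend * horizon, history[-1])
        return int(np.ceil(forecast / SLA_LOAD_PER_VM))

    for t in (50, 100, 150):
        print(t, "reactive:", reactive_vms(load[t]), "predictive:", predictive_vms(load[:t + 1]))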

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE This study assessed whether a cycle of "routine" therapeutic drug monitoring (TDM) for imatinib dosage individualization, targeting an imatinib trough plasma concentration (Cmin) of 1,000 ng/ml (tolerance: 750-1,500 ng/ml), could improve clinical outcomes in chronic myelogenous leukemia (CML) patients, compared with TDM used only in case of problems ("rescue" TDM). METHODS The Imatinib Concentration Monitoring Evaluation study was a multicenter randomized controlled trial including adult patients in chronic or accelerated phase CML who had been receiving imatinib for less than 5 years. Patients were allocated 1:1 to "routine TDM" or "rescue TDM." The primary endpoint was a combined outcome (failure- and toxicity-free survival with continuation on imatinib) over 1-year follow-up, analyzed by intention-to-treat (ISRCTN31181395). RESULTS Among 56 patients (55 evaluable), 14/27 (52 %) receiving "routine TDM" remained event-free versus 16/28 (57 %) "rescue TDM" controls (P = 0.69). In the "routine TDM" arm, dosage recommendations were correctly adopted in 14 patients (median Cmin: 895 ng/ml), who had fewer unfavorable events (28 %) than the 13 patients not receiving the advised dosage (77 %; P = 0.03; median Cmin: 648 ng/ml). CONCLUSIONS This first target concentration intervention trial could not formally demonstrate a benefit of "routine TDM", owing to the small patient number and surprisingly limited prescriber adherence to the dosage recommendations. Favorable outcomes were, however, found in patients actually elected for target dosing. This study thus provides a first prospective indication that TDM is a useful tool to guide drug dosage and treatment-shift decisions. The study design and analysis provide an interesting paradigm for future randomized TDM trials on targeted anticancer agents.
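
As a hypothetical illustration of the dose-individualization step behind "routine TDM", the sketch below applies the proportional adjustment commonly used to steer a measured trough concentration towards the 1,000 ng/ml target within the 750-1,500 ng/ml tolerance quoted above; the patient values and the 100 mg rounding step are assumptions.

    # Proportional imatinib dose adjustment towards a target trough concentration.
    TARGET_CMIN = 1000.0            # ng/ml
    TOLERANCE = (750.0, 1500.0)     # ng/ml

    def recommend_dose(current_dose_mg: float, measured_cmin: float) -> float:
        """Assume roughly dose-proportional exposure and scale the dose towards the target."""
        if TOLERANCE[0] <= measured_cmin <= TOLERANCE[1]:
            return current_dose_mg                      # within tolerance: no change
        adjusted = current_dose_mg * TARGET_CMIN / measured_cmin
        return round(adjusted / 100) * 100              # round to a practical 100 mg step

    print(recommend_dose(400, 648))     # low trough -> suggests 600 mg
    print(recommend_dose(400, 895))     # within tolerance -> keeps 400 mg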

Relevance:

30.00%

Publisher:

Abstract:

The objective of this article is to demonstrate the feasibility of on-demand creation of cloud-based elastic mobile core networks, along with their lifecycle management. For this purpose, the article describes the key elements needed to realize the architectural vision of EPC as a Service, an implementation option of the Evolved Packet Core, as specified by 3GPP, that can be deployed in cloud environments. To meet several challenging requirements associated with implementing EPC over a cloud infrastructure and providing it "as a Service," this article presents a number of different options, each with different characteristics, advantages, and disadvantages. A thorough analysis comparing the different implementation options is also presented.

Relevance:

30.00%

Publisher:

Abstract:

Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from instantiating service VMs in the correct order with an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources to use to ensure that the performance requirements of all of her/his applications are met. She/he is also interested in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure provider is interested in optimally provisioning the virtual resources onto the available physical infrastructure so that her/his operational costs are minimized while the performance of tenants' applications is maximized. Motivated by the complexities associated with managing and scaling distributed applications while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bound application services. We describe several algorithms for adapting the number of VMs allocated to a distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
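
As a hedged, toy-scale sketch of the genetic-algorithm allocation idea described above, the example below places VMs onto physical hosts while trading off the number of active hosts against overload; the VM sizes, host capacity, cost weights and GA operators are assumptions, not the thesis implementation.

    # Tiny genetic algorithm for multi-objective VM-to-host allocation (toy example).
    import random

    random.seed(42)
    VM_CPU = [2, 4, 1, 8, 3, 2, 4, 1]       # hypothetical CPU demand per VM
    HOST_CAPACITY = 8
    N_HOSTS = 5

    def cost(assignment):
        """Weighted sum: number of active hosts plus a heavy penalty for overloaded hosts."""
        used = {}
        for vm, host in enumerate(assignment):
            used[host] = used.get(host, 0) + VM_CPU[vm]
        overload = sum(max(load - HOST_CAPACITY, 0) for load in used.values())
        return len(used) + 10 * overload

    def mutate(assignment):
        child = assignment[:]
        child[random.randrange(len(child))] = random.randrange(N_HOSTS)
        return child

    def crossover(a, b):
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    population = [[random.randrange(N_HOSTS) for _ in VM_CPU] for _ in range(30)]
    for _ in range(200):
        population.sort(key=cost)
        parents = population[:10]                       # elitist selection
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents))) for _ in range(20)]
        population = parents + offspring

    best = min(population, key=cost)
    print("best assignment (vm -> host):", best, "cost:", cost(best))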