909 results for Multidimensional Scaling
Abstract:
Two important emerging technologies, microgrids and electricity generation from wind resources, are increasingly being combined. Various control strategies can be implemented, and droop control provides a simple option that requires no communication between microgrid components. Eliminating the communication system as a single point of failure is especially important in the remote, islanded microgrids considered in this work. However, traditional droop control does not allow the microgrid to utilize much of the power available from the wind. This dissertation presents a novel droop control strategy that implements a droop surface of higher dimension than the traditional strategy. The droop control relationship then depends on two variables: the dc microgrid bus voltage and the wind speed at the current time. An approach for optimizing this droop control surface to meet a given objective, for example utilizing all of the power available from a wind resource, is proposed and demonstrated. Various cases are used to test the proposed optimal high-dimension droop control method and demonstrate its function. First, the use of linear multidimensional droop control without optimization is demonstrated through simulation. Next, an optimal high-dimension droop control surface is implemented with a simple dc microgrid containing two sources and one load. Various cases of changing load and wind speed are investigated using simulation and hardware-in-the-loop techniques. Optimal multidimensional droop control is demonstrated with a wind resource in a full dc microgrid example containing an energy storage device as well as multiple sources and loads. Finally, the optimal high-dimension droop control method is applied with a solar resource and with a load model developed for a military patrol base application. The operation of the proposed control is again investigated using simulation and hardware-in-the-loop techniques.
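The core idea of the abstract, a droop relationship that depends on two variables instead of one, can be sketched as a set-point lookup over a surface indexed by bus voltage and wind speed. The grid values, units, and function name below are purely illustrative assumptions, not taken from the dissertation; the sketch only shows the mechanics of evaluating such a surface by bilinear interpolation.

```python
import numpy as np

# Hypothetical two-variable droop surface: rows index the dc bus
# voltage, columns index the wind speed. All numbers are illustrative.
V_GRID = np.array([370.0, 380.0, 390.0, 400.0])  # dc bus voltage (V)
W_GRID = np.array([4.0, 8.0, 12.0])              # wind speed (m/s)
# Power set-point surface (kW): higher wind speed and lower bus
# voltage both call for more injected power.
P_SURF = np.array([
    [2.0, 6.0, 10.0],
    [1.5, 5.0,  9.0],
    [1.0, 4.0,  8.0],
    [0.5, 3.0,  7.0],
])

def droop_setpoint(v_bus, w_wind):
    """Evaluate the droop surface by bilinear interpolation."""
    v = np.clip(v_bus, V_GRID[0], V_GRID[-1])
    w = np.clip(w_wind, W_GRID[0], W_GRID[-1])
    i = max(1, min(np.searchsorted(V_GRID, v), len(V_GRID) - 1))
    j = max(1, min(np.searchsorted(W_GRID, w), len(W_GRID) - 1))
    tv = (v - V_GRID[i - 1]) / (V_GRID[i] - V_GRID[i - 1])
    tw = (w - W_GRID[j - 1]) / (W_GRID[j] - W_GRID[j - 1])
    p00, p01 = P_SURF[i - 1, j - 1], P_SURF[i - 1, j]
    p10, p11 = P_SURF[i, j - 1], P_SURF[i, j]
    return (1 - tv) * ((1 - tw) * p00 + tw * p01) \
         + tv * ((1 - tw) * p10 + tw * p11)
```

Optimizing the surface, as the dissertation proposes, would then amount to tuning the `P_SURF` entries against an objective such as maximum wind power utilization.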
Abstract:
We developed a gel- and label-free proteomics platform for comparative studies of human serum. The method involves depletion of the six most abundant proteins and protein fractionation by Off-Gel IEF and RP-HPLC, followed by tryptic digestion, LC-MS/MS, protein identification, and relative quantification using probabilistic peptide match score summation (PMSS). We evaluated the performance and reproducibility of the complete platform and of the individual dimensions, using chromatograms of the RP-HPLC runs, PMSS-based abundance scores, and abundance distributions as objective endpoints. We were interested in whether a relationship exists between the quantity ratio and the PMSS score ratio. The complete analysis was performed four times with two sets of serum samples containing different concentrations of spiked bovine beta-lactoglobulin (0.1 and 0.3%, w/w). The two concentrations resulted in significantly differing PMSS scores when compared with the variability in PMSS scores of all other protein identifications. We identified 196 proteins, of which 116 were identified four times in corresponding fractions, and of these 73 qualified for relative quantification. Finally, we characterized the PMSS-based protein abundance distributions with respect to the two dimensions of fractionation and discuss some interesting patterns representing discrete isoforms. We conclude that the combination of Off-Gel electrophoresis (OGE) and HPLC is a reproducible protein fractionation technique, that PMSS is applicable for relative quantification, that the number of quantifiable proteins is always smaller than the number of identified proteins, and that reproducibility of protein identifications should supplement probabilistic acceptance criteria.
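The quantification step named in the abstract, peptide match score summation, reduces to summing the match scores of all peptides assigned to a protein and comparing the sums between samples. The sketch below is a minimal illustration of that arithmetic under assumed inputs; the data layout and function names are ours, not the paper's.

```python
from collections import defaultdict

def pmss(peptide_matches):
    """Probabilistic peptide match score summation: sum the match
    scores of all peptides assigned to each protein."""
    scores = defaultdict(float)
    for protein, score in peptide_matches:
        scores[protein] += score
    return dict(scores)

def relative_quant(matches_a, matches_b):
    """PMSS score ratios between two samples, for proteins
    identified (with nonzero score) in both."""
    a, b = pmss(matches_a), pmss(matches_b)
    return {p: a[p] / b[p] for p in a if b.get(p, 0.0) > 0.0}
```

This also makes the paper's closing caveat concrete: a protein identified in only one sample contributes to the identification count but never appears in the ratio dictionary, so the quantifiable set is necessarily smaller.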
Abstract:
Current advanced cloud infrastructure management solutions allow scheduling actions that dynamically change the number of running virtual machines (VMs). This approach, however, does not guarantee that the scheduled number of VMs will properly handle the actual user-generated workload, especially if user utilization patterns change. We propose using a dynamically generated scaling model for the VMs containing the services of distributed applications, which is able to react to variations in the number of application users. We answer the following question: how can we dynamically decide how many services of each type are needed to handle a larger workload within the same time constraints? We describe a mechanism for dynamically composing the SLAs that control the scaling of distributed services, combining data analysis mechanisms with application benchmarking across multiple VM configurations. By processing the data sets generated by multiple application benchmarks, we discover a set of service monitoring metrics able to predict critical Service Level Agreement (SLA) parameters. By combining this set of predictor metrics with a heuristic for selecting appropriate scaling-out paths for the services of distributed applications, we show how SLA scaling rules can be inferred and then used to control the runtime scale-in and scale-out of distributed services. We validate our architecture and models by performing scaling experiments with a distributed application representative of the enterprise class of information systems. We show how dynamically generated SLAs can be used successfully to control the management of distributed service scaling.
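A heavily simplified sketch of the pipeline the abstract describes: fit a predictor from a monitored metric (here, assumed to be requests per second per instance) to an SLA parameter (response time) using benchmark data, then derive a scale-out rule from it. The linear model, metric choice, and all numbers are our assumptions for illustration; the paper's actual mechanism composes SLAs from richer benchmark data.

```python
def fit_line(xs, ys):
    """Least-squares fit y = a*x + b, e.g. per-instance load (x)
    versus measured response time (y) from benchmark runs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def instances_needed(total_rps, sla_ms, a, b):
    """Smallest instance count whose per-instance load keeps the
    predicted response time within the SLA threshold."""
    n = 1
    while a * (total_rps / n) + b > sla_ms:
        n += 1
    return n
```

A scale-in rule is the mirror image: drop to the smallest `n` that still satisfies the predicted SLA for the current workload.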
Abstract:
Modern cloud-based applications and infrastructures may include resources and services (components) from multiple cloud providers; they are heterogeneous by nature and require adjustment, composition, and integration. Current static, predefined cloud integration architectures and models can meet specific application requirements only with difficulty. In this paper, we propose the Intercloud Operations and Management Framework (ICOMF) as part of the more general Intercloud Architecture Framework (ICAF), which provides a basis for building and operating a dynamically manageable multi-provider cloud ecosystem. The proposed ICOMF enables dynamic resource composition and decomposition, with a main focus on translating business models and objectives into ensembles of cloud services. Our model is user-centric and focuses on specific application execution requirements by leveraging emerging virtualization techniques. From a cloud provider perspective, the ecosystem provides more insight into how best to customize the offerings of virtualized resources.
Abstract:
Global investment in Sustainable Land Management (SLM) has been substantial, but knowledge gaps remain. Overviews of where land degradation (LD) is taking place and how land users are addressing the problem using SLM are still lacking for most individual countries and regions. Relevant maps focus more on LD than SLM, and they have been compiled using different methods. This makes it impossible to compare the benefits of SLM interventions and prevents informed decision-making on how best to invest in land. To fill this knowledge gap, a standardised mapping method has been collaboratively developed by the World Overview of Conservation Approaches and Technologies (WOCAT), FAO’s Land Degradation Assessment in Drylands (LADA) project, and the EU’s Mitigating Desertification and Remediating Degraded Land (DESIRE) project. The method generates information on the distribution and characteristics of LD and SLM activities and can be applied at the village, national, or regional level. It is based on participatory expert assessment, documents, and surveys. These data sources are spatially displayed across a land-use systems base map. By enabling mapping of the DPSIR framework (Driving Forces-Pressures-State-Impacts-Responses) for degradation and conservation, the method provides key information for decision-making. It may also be used to monitor LD and conservation following project implementation. This contribution explains the mapping method, highlighting findings made at different levels (national and local) in South Africa and the Mediterranean region. Keywords: Mapping, Decision Support, Land Degradation, Sustainable Land Management, Ecosystem Services, Participatory Expert Assessment
Abstract:
Clays and claystones are used as backfill and barrier materials in the design of waste repositories because they act as hydraulic barriers and retain contaminants. Transport through such barriers occurs mainly by molecular diffusion, so there is interest in relating the diffusion properties of clays to their structural properties. In previous work, we developed a concept for up-scaling pore-scale molecular diffusion coefficients using a grid-based model of the sample pore structure. Here we present an operational algorithm that can generate such model pore structures for polymineral materials. The obtained pore maps match the rock's mineralogical composition and its macroscopic properties such as porosity and grain and pore size distributions. Representative ensembles of grains in 2D or 3D are created by a lattice Monte Carlo (MC) method, which minimizes the interfacial energy of grains starting from an initial grain distribution. Pores are generated at grain boundaries and/or within grains. The method is general and allows the generation of anisotropic structures with grains of approximately predetermined shapes, or with mixtures of different grain types. A specific focus of this study was the simulation of clay-like materials. The generated clay pore maps were then used to derive upscaled effective diffusion coefficients for non-sorbing tracers using a homogenization technique. The large number of generated maps allowed us to check the relations between microstructural features of clays and their effective transport parameters, as required to explain and extrapolate experimental diffusion results. As examples, we present a set of 2D and 3D simulations and investigate the effects of nanopores within particles (interlayer pores) and micropores between particles. Archie's simple power law is followed in systems with only micropores. When nanopores are present, additional parameters are required; the data reveal that effective diffusion coefficients can be described by a sum of two power functions related to the micro- and nanoporosity. We further used the model to investigate the relationships between particle orientation and the effective transport properties of the sample.
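The two scaling relations the abstract invokes are simple to state: Archie's law relates effective diffusivity to porosity through a single power, and the two-porosity case adds a second power term. The functional forms below follow that description; the coefficient values in the usage are illustrative assumptions, not fitted results from the paper.

```python
def archie(phi, m, d0=1.0):
    """Archie's law: effective diffusivity scales as a power of
    porosity, D_eff = D0 * phi**m."""
    return d0 * phi ** m

def two_term_deff(phi_micro, phi_nano, a, m, b, n, d0=1.0):
    """Sum of two power functions for samples with both micro- and
    nanoporosity, as the abstract suggests:
    D_eff = D0 * (a * phi_micro**m + b * phi_nano**n)."""
    return d0 * (a * phi_micro ** m + b * phi_nano ** n)
```

With the nanoporosity set to zero, the two-term form reduces to Archie's law, which matches the abstract's observation that the simple power law holds in systems with only micropores.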
Abstract:
In this study, we investigated the scaling relations between trabecular bone volume fraction (BV/TV) and parameters of the trabecular microstructure at different skeletal sites. Cylindrical bone samples with a diameter of 8 mm were harvested in vitro from different skeletal sites of 154 human donors: 87 from the distal radius, 59/69 from the thoracic/lumbar spine, 51 from the femoral neck, and 83 from the greater trochanter. μCT images were obtained with an isotropic spatial resolution of 26 μm. BV/TV and trabecular microstructure parameters (TbN, TbTh, TbSp; the scaling indices (mean ⟨·⟩ and standard deviation σ of α and αz); and the Minkowski Functionals (Surface, Curvature, Euler)) were computed for each sample. The regression coefficient β was determined for each skeletal site as the slope of a linear fit in the double-logarithmic representation of the correlation of BV/TV versus the respective microstructure parameter. Statistically significant correlation coefficients ranging from r = 0.36 to r = 0.97 were observed for BV/TV versus the microstructure parameters, except for Curvature and Euler. The regression coefficients β were 0.19 to 0.23 (TbN), 0.21 to 0.30 (TbTh), −0.28 to −0.24 (TbSp), 0.58 to 0.71 (Surface), 0.12 to 0.16 (⟨α⟩), 0.07 to 0.11 (⟨αz⟩), −0.44 to −0.30 (σ(α)), and −0.39 to −0.14 (σ(αz)) at the different skeletal sites. The 95% confidence intervals of β overlapped for almost all microstructure parameters at the different skeletal sites. The scaling relations were independent of vertebral fracture status and similar for subjects aged 60–69, 70–79, and >79 years. In conclusion, the bone volume fraction–microstructure scaling relations showed a rather universal character.
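The β reported throughout the abstract is the slope of a least-squares line fitted to log-transformed data, i.e. the exponent of an assumed power law between BV/TV and a microstructure parameter. A minimal sketch of that computation, with our own function name and no claim about the study's exact fitting procedure:

```python
import math

def loglog_slope(bvtv, param):
    """Regression coefficient beta: slope of a least-squares line
    fitted to (log BV/TV, log parameter) pairs, i.e. the exponent
    of a power-law relation param ~ BV/TV**beta."""
    xs = [math.log(x) for x in bvtv]
    ys = [math.log(y) for y in param]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

If the parameter follows an exact power law in BV/TV, the fitted slope recovers the exponent, which is why overlapping β confidence intervals across sites support the abstract's claim of a universal scaling relation.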
Abstract:
Spatial scaling is an integral aspect of many spatial tasks that involve symbol-to-referent correspondences (e.g., map reading, drawing). In this study, we asked 3- to 6-year-olds and adults to locate objects in a two-dimensional spatial layout using information from a second spatial representation (a map). We examined how the scaling factor and reference features, such as the shape of the layout or the presence of landmarks, affect performance. Results showed that spatial scaling on this simple task undergoes considerable development, especially between 3 and 5 years of age. Furthermore, the youngest children showed large individual variability and profited from landmark information. Accuracy differed between scaled and unscaled items, but not between items using different scaling factors (1:2 vs. 1:4), suggesting that participants encoded relative rather than absolute distances.
Abstract:
The session aims to analyze efforts to scale up cleaner and more efficient energy solutions for poor people in developing countries by addressing the following questions: What factors along the whole value chain, and in the institutional, social, and environmental spheres, enable the up-scaling of improved pro-poor technologies? Are there differences between energy carriers or across contexts? What are the most promising entry points for up-scaling?