953 results for Distribution system optimization
Abstract:
Although conventional sediment parameters (mean grain size, sorting, and skewness) and provenance have typically been used to infer sediment transport pathways, most freshwater, brackish, and marine environments are also characterized by abundant sediment constituents of biological, and possibly anthropogenic and volcanic, origin that can provide additional insight into local sedimentary processes. The biota will be spatially distributed according to its response to environmental parameters such as water temperature, salinity, dissolved oxygen, organic carbon content, grain size, and intensity of currents and tidal flow, whereas the presence of anthropogenic and volcanic constituents will reflect proximity to source areas and whether they are fluvially or aerially transported. Because each of these constituents has a unique environmental signature, it is a more precise proxy for its source area than the conventional sedimentary process indicators. This San Francisco Bay Coastal System study demonstrates that by applying a multi-proxy approach, the primary sites of sediment transport can be identified. Many of these sites are far from where the constituents originated, showing that sediment transport is widespread in the region. Although not often used, identifying and interpreting the distribution of naturally occurring and allochthonous biologic, anthropogenic, and volcanic sediment constituents is a powerful tool to aid in the investigation of sediment transport pathways in other coastal systems.
Abstract:
To obtain insight into the relationship between the spatial distribution of organic-walled dinoflagellate cysts (dinocysts) and local environmental conditions, fifty-eight surface sediment samples from the coastal shelf off SW Africa were investigated for their dinocyst content, with special focus on the two main river systems and the active upwelling that characterise this region. To avoid possible overprinting by species-selective preservation, samples were selected mainly from shelf sites where high sedimentation rates and/or low bottom-water oxygen concentrations prevail. Multivariate ordination analyses were carried out to investigate the relationship between the distribution patterns of individual species and environmental parameters of the upper water column and sediment transport processes. The main oceanographic variables at the surface (temperature, salinity, nutrients, chlorophyll-a) in the region show onshore-offshore gradients. This pattern is reflected in the dinocyst associations, with high relative abundances of heterotrophic dinocyst species in neritic regions characterised by high chlorophyll-a and low salinity conditions in surface waters. Phototrophic dinocyst species, notably Operculodinium centrocarpum, dominate in the more oceanic area. Differences in the distribution of phototrophic dinocyst species can be related to sea surface salinity and sea surface temperature gradients and, to a lesser extent, to chlorophyll-a concentrations. Apart from these longitudinal gradients, the dinocyst distribution clearly reflects regional environmental features. Six groups of species can be distinguished, characteristic of (1) coastal regions (cysts of Polykrikos kofoidii and Selenopemphix quanta), (2) the vicinity of active upwelling (Brigantedinium spp., Echinidinium aculeatum, Echinidinium spp. and Echinidinium transparantum), (3) river mouths (Lejeunecysta oliva, cysts of Protoperidinium americanum, Selenopemphix nephroides and Votadinium calvum), (4) slope and open-ocean sediments (Dalella chathamense, Impagidinium patulum and Operculodinium centrocarpum), (5) the southern Benguela region (south of 24°S) (Spiniferites ramosus) and (6) the northern Benguela region (north of 24°S) (Nematosphaeropsis labyrinthus and Pyxidinopsis reticulata). No indication of overprinting of the palaeoecological signal by lateral transport of allochthonous species could be observed.
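The ordination step can be sketched in code; the example below uses canonical correlation analysis from scikit-learn as a stand-in for the ordination methods typically applied in such studies, and all sample data and variable names are synthetic placeholders, not the study's measurements:

```python
# Minimal sketch: relating a species-abundance matrix to environmental
# variables with a canonical ordination. Canonical correlation analysis
# (sklearn) stands in for ecological ordination; data are synthetic.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

n_sites, n_species, n_env = 58, 6, 4  # 58 samples, as in the study
env = rng.normal(size=(n_sites, n_env))               # e.g. SST, SSS, chl-a, nutrients
species = env @ rng.normal(size=(n_env, n_species))   # abundances driven by environment
species += 0.5 * rng.normal(size=species.shape)       # plus sampling noise

cca = CCA(n_components=2)
site_scores, env_scores = cca.fit_transform(species, env)

# Species whose loadings cluster on the same canonical axis respond to
# similar environmental gradients, which is how species groups emerge.
print(cca.x_loadings_.round(2))
```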
Abstract:
The statistical distributions of different software properties have been thoroughly studied in the past, including software size, complexity, and the number of defects. In the case of object-oriented systems, these distributions have been found to obey a power law, a common statistical distribution also found in many other fields. However, we have found that for some statistical properties the behavior does not entirely follow a power law, but rather a mixture of a lognormal and a power law distribution. Our study is based on the Qualitas Corpus, a large compendium of diverse Java-based software projects. We have measured the Chidamber and Kemerer metrics suite for every file of every Java project in the corpus. Our results show that the range of high values for the different metrics follows a power law distribution, whereas the rest of the range follows a lognormal distribution. This is a pattern typical of so-called double Pareto distributions, also found in empirical studies of other software properties.
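As an illustration of the reported pattern, the sketch below fits a lognormal to the body of a sample and a power law (via the standard Hill maximum-likelihood estimator) to its tail; the synthetic data and the body/tail cutoff are assumptions for demonstration, not the Qualitas Corpus measurements:

```python
# Sketch: detecting a "lognormal body, power-law tail" (double Pareto)
# pattern in a metric sample. Data are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic metric values: a lognormal bulk plus a Pareto tail.
body = rng.lognormal(mean=1.0, sigma=0.6, size=9000)
tail = (rng.pareto(a=2.0, size=1000) + 1.0) * 10.0
values = np.concatenate([body, tail])

# Fit a lognormal to the lower part of the range.
x_min = np.quantile(values, 0.9)  # assumed body/tail cutoff
shape, loc, scale = stats.lognorm.fit(values[values < x_min], floc=0)

# Hill MLE for the tail exponent: alpha = 1 + n / sum(ln(x / x_min)).
tail_vals = values[values >= x_min]
alpha = 1.0 + len(tail_vals) / np.sum(np.log(tail_vals / x_min))

print(f"lognormal body: sigma={shape:.2f}, scale={scale:.2f}")
print(f"power-law tail: alpha={alpha:.2f} above x_min={x_min:.1f}")
```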
Abstract:
Nowadays, computing platforms consist of a very large number of components that need to be supplied with different voltage levels and power requirements. Even a very small platform, like a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems are required to provide, in a very short time, the right power architecture that optimizes performance and meets electrical specifications plus cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency, and cost targets. The design difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers. These products range from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built. The designer has to select a limited number of converters in order to simplify the analysis. To overcome these difficulties, this thesis proposes a new design methodology for power supply systems. This methodology integrates evolutionary computation techniques to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps: one for the automatic generation of architectures and the other for the optimized selection of components. This thesis details the implementation of these two steps. The usefulness of the methodology is corroborated by contrasting results on real problems and on experiments designed to test the limits of the algorithms.
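The component-selection step can be illustrated with a small genetic algorithm; the converter catalog, cost/efficiency figures, and fitness weighting below are hypothetical placeholders, not the catalogs or objectives used in the thesis:

```python
# Sketch: evolutionary selection of one converter per load so that total
# cost is minimized while overall efficiency stays above a target.
# Catalog numbers and the fitness weighting are hypothetical.
import random

random.seed(42)

# (cost in $, efficiency) for each candidate converter, per load.
CATALOG = [[(2.0, 0.90), (3.5, 0.94), (5.0, 0.96)] for _ in range(20)]
EFF_TARGET = 0.92

def fitness(chromosome):
    """Lower is better: total cost plus a penalty when the mean
    efficiency falls below the target."""
    cost = sum(CATALOG[i][g][0] for i, g in enumerate(chromosome))
    eff = sum(CATALOG[i][g][1] for i, g in enumerate(chromosome)) / len(chromosome)
    return cost + 100.0 * max(0.0, EFF_TARGET - eff)

def evolve(pop_size=60, generations=200, mut_rate=0.05):
    pop = [[random.randrange(3) for _ in CATALOG] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [random.randrange(3) if random.random() < mut_rate else g
                     for g in child]            # per-gene mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print("best selection:", best, "fitness:", round(fitness(best), 2))
```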
Abstract:
The technique of Abstract Interpretation has allowed the development of very sophisticated global program analyses which are at the same time provably correct and practical. We present in a tutorial fashion a novel program development framework which uses abstract interpretation as a fundamental tool. The framework uses modular, incremental abstract interpretation to obtain information about the program. This information is used to validate programs, to detect bugs with respect to partial specifications written using assertions (in the program itself and/or in system libraries), to generate and simplify run-time tests, and to perform high-level program transformations such as multiple abstract specialization, parallelization, and resource usage control, all in a provably correct way. In the case of validation and debugging, the assertions can refer to a variety of program points such as procedure entry, procedure exit, points within procedures, or global computations. The system can reason with much richer information than, for example, traditional types. This includes data structure shape (including pointer sharing), bounds on data structure sizes, and other operational variable instantiation properties, as well as procedure-level properties such as determinacy, termination, nonfailure, and bounds on resource consumption (time or space cost). CiaoPP, the preprocessor of the Ciao multi-paradigm programming system, which implements the described functionality, will be used to illustrate the fundamental ideas.
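To give a flavor of the underlying technique, here is a minimal sign-domain abstract interpreter; this is a generic illustration of abstract interpretation, not CiaoPP's analysis domains or implementation, which operate on far richer properties over logic programs:

```python
# Sketch of abstract interpretation over a sign domain: operations are
# evaluated on abstract values {NEG, ZERO, POS, TOP} instead of numbers,
# yielding sound but approximate results.
NEG, ZERO, POS, TOP = "-", "0", "+", "T"

def alpha(n):
    """Abstraction: map a concrete number to its sign."""
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def add(a, b):
    """Abstract addition: sound but approximate (+ plus - is unknown)."""
    if ZERO in (a, b):
        return b if a == ZERO else a
    return a if a == b and a != TOP else TOP

def mul(a, b):
    """Abstract multiplication: signs multiply exactly."""
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

# The analysis proves x*x is non-negative whenever the sign of x is
# known, and soundly answers "unknown" (TOP) otherwise:
for x in (NEG, ZERO, POS, TOP):
    print(f"sign(x)={x} -> sign(x*x)={mul(x, x)}")
```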
Abstract:
We present a tutorial overview of CiaoPP, the Ciao system preprocessor. Ciao is a public-domain, next-generation logic programming system, which subsumes ISO-Prolog and is specifically designed to a) be highly extensible via libraries and b) support modular program analysis, debugging, and optimization. The latter tasks are performed in an integrated fashion by CiaoPP. CiaoPP uses modular, incremental abstract interpretation to infer properties of program predicates and literals, including types, variable instantiation properties (including modes), non-failure, determinacy, bounds on computational cost, bounds on sizes of terms in the program, etc. Using such analysis information, CiaoPP can find errors at compile-time in programs and/or perform partial verification. CiaoPP checks how programs call system libraries and also any assertions present in the program or in other modules used by the program. These assertions are also used to generate documentation automatically. CiaoPP also uses analysis information to perform program transformations and optimizations such as multiple abstract specialization, parallelization (including granularity control), and optimization of run-time tests for properties which cannot be checked completely at compile-time. We illustrate "hands-on" the use of CiaoPP in all these tasks. By design, CiaoPP is a generic tool, which can easily be tailored to perform these and other tasks for different LP and CLP dialects.
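The run-time side of assertion checking can be sketched as follows; the decorator below is a hypothetical Python analogue of checking pre/postconditions dynamically when they cannot be discharged at compile time, not Ciao's assertion language itself:

```python
# Sketch: run-time checking of pre/postconditions, loosely analogous to
# the checks CiaoPP leaves as run-time tests when a property cannot be
# verified at compile time. Hypothetical Python analogue only.
from functools import wraps

def assertion(pre, post):
    """Wrap a function with a precondition on its arguments and a
    postcondition on (arguments, result)."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args):
            assert pre(*args), f"{fn.__name__}: precondition failed on {args}"
            result = fn(*args)
            assert post(args, result), f"{fn.__name__}: postcondition failed"
            return result
        return wrapper
    return deco

@assertion(pre=lambda xs, ys: isinstance(xs, list) and isinstance(ys, list),
           post=lambda args, r: len(r) == len(args[0]) + len(args[1]))
def append(xs, ys):
    """List concatenation with checked pre/postconditions."""
    return xs + ys

print(append([1, 2], [3]))  # passes both checks
```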