979 results for DYNAMIC FEATURES


Relevance: 30.00%

Abstract:

Population-based metaheuristics, such as particle swarm optimization (PSO), have been employed to solve many real-world optimization problems. Although it is often sufficient to find a single solution to these problems, there are cases where identifying multiple, diverse solutions can be beneficial or even required. Some of these problems are further complicated by a change in their objective function over time. This type of optimization is referred to as dynamic, multi-modal optimization. Algorithms which exploit multiple optima in a search space are identified as niching algorithms. Although numerous dynamic niching algorithms have been developed, their performance is often measured solely on their ability to find a single, global optimum. Furthermore, the comparisons often use synthetic benchmarks whose landscape characteristics are generally limited and unknown. This thesis provides a landscape analysis of the dynamic benchmark functions commonly developed for multi-modal optimization. The benchmark analysis results reveal that the mechanisms responsible for dynamism in the current dynamic benchmarks do not significantly affect landscape features, suggesting a lack of representation for problems whose landscape features vary over time. This analysis is used in a comparison of current niching algorithms to identify the effects that specific landscape features have on niching performance. Two performance metrics are proposed to measure both the scalability and accuracy of the niching algorithms. The algorithm comparison results demonstrate which algorithms are best suited to a variety of dynamic environments. This comparison also examines each of the algorithms in terms of its niching behaviour, and analyzes the range of, and trade-off between, scalability and accuracy when tuning the algorithms' respective parameters.
These results contribute to the understanding of current niching techniques as well as the problem features that ultimately dictate their success.

Relevance: 30.00%

Abstract:

The atmospheric boundary layer (ABL) is the layer just above the Earth's surface; it responds to surface forcing within a short period, of an hour or less. In this thesis, characteristics of the boundary layer over the ocean, coastal and inland areas of the atmosphere, especially over the monsoon regime, are thoroughly studied. The study of the coastal zone is important due to its high vulnerability, arising mainly from sea breeze circulation and the associated changes in the atmospheric boundary layer. The major scientific problems addressed in this thesis are the diurnal and seasonal variation of coastal meteorological properties, the characteristic differences in the ABL between active and weak monsoons, features of the ABL over the marine environment, and the variation of the boundary layer structure over an inland station. The thesis describes the various features of the ABL associated with active and weak monsoons, and the surface boundary layer properties associated with the active and weak epochs. The study provides knowledge of the marine atmospheric boundary layer (MABL) that can serve as estimates of boundary layer parameters over the marine atmosphere, documenting the values and variability of ABL parameters such as surface wind, surface friction, drag coefficient, wind stress and wind stress curl.

Relevance: 30.00%

Abstract:

Traditionally, we've focussed on the question of how to make a system easy to code the first time, or perhaps on how to ease the system's continued evolution. But if we look at life-cycle costs, then we must conclude that the important question is how to make a system easy to operate. To do this we need to make it easy for the operators to see what's going on and then to manipulate the system so that it does what it is supposed to. This is a radically different criterion for success. What makes a computer system visible and controllable? This is a difficult question, but it's clear that today's modern operating systems, with nearly 50 million source lines of code, are neither. Strikingly, the MIT Lisp Machine and its commercial successors provided almost the same functionality as today's mainstream systems, but with only 1 million lines of code. This paper is a retrospective examination of the features of the Lisp Machine hardware and software system. Our key claim is that by building the Object Abstraction into the lowest tiers of the system, great synergy and clarity were obtained. It is our hope that this is a lesson that can impact tomorrow's designs. We also speculate on how the spirit of the Lisp Machine could be extended to include a comprehensive access control model and how new layers of abstraction could further enrich this model.

Relevance: 30.00%

Abstract:

Network management is a very broad field covering many different aspects. This doctoral thesis focuses on resource management in broadband networks that provide mechanisms for resource reservation, such as Asynchronous Transfer Mode (ATM) or Multi-Protocol Label Switching (MPLS). Logical networks can be established using ATM Virtual Paths (VP) or MPLS Label Switched Paths (LSP), which we refer to generically as logical paths. Network users then use these logical paths, which may have resources assigned to them, to establish their communications. Moreover, logical paths are very flexible and their characteristics can be changed dynamically. This work focuses, in particular, on the dynamic management of this logical network in order to maximize its performance and adapt it to the offered connections. In this scenario, several mechanisms can affect and modify the characteristics of the logical paths (bandwidth, route, etc.). These mechanisms include load balancing (bandwidth reallocation and rerouting) and fault restoration (the use of backup logical paths). Both mechanisms can modify the logical network and manage the resources (bandwidth) of the physical links, so there is a need to coordinate them to avoid possible interference. Conventional resource management based on a logical network periodically recomputes (for example, every hour or every day) the entire logical network in a centralized way. This introduces the problem that readjustments of the logical network are not made at the moment when problems actually occur; it also requires maintaining a centralized view of the whole network. This thesis proposes a distributed architecture based on a multi-agent system.
The main objective of this architecture is to perform, jointly and in a coordinated way, resource management at the logical network level, integrating the bandwidth readjustment mechanisms with the preplanned restoration mechanisms, including the management of the bandwidth reserved for restoration. This management is carried out continuously rather than periodically, acting when a problem is detected (when a logical path is congested, i.e., when it is rejecting user connection requests because it is saturated), and in a completely distributed way, i.e., without maintaining a global view of the network. The proposed architecture thus makes small rearrangements to the logical network, continuously adapting it to user demand. The architecture also takes other objectives into consideration, such as scalability, modularity, robustness, flexibility and simplicity. The proposed multi-agent system is structured in two layers of agents: monitoring (M) agents and performance (P) agents. These agents are located at the different nodes of the network: there is one P agent and several M agents at each node, the latter subordinate to the former, so the proposed architecture can be seen as a hierarchy of agents. Each agent is responsible for monitoring and controlling the resources to which it is assigned. Various experiments have been carried out using a connection-level distributed simulator of our own design. The results show that the proposed architecture is able to perform its assigned tasks of congestion detection, dynamic bandwidth reallocation and rerouting in coordination with the mechanisms for preplanned restoration and for managing the bandwidth reserved for restoration. The distributed architecture offers acceptable scalability and robustness thanks to its flexibility and modularity.
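The core reactive loop described above can be sketched in a few lines. This is an illustrative toy, not the thesis's actual protocol (the class names, the congestion signal and the 10-unit grant are invented): a performance (P) agent reacts to a purely local trigger, a logical path rejecting connection requests, and grows that path from the spare bandwidth it manages, with no global view of the network.

```python
class LogicalPath:
    """A logical path (e.g. an MPLS LSP) with a bandwidth reservation."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.offered = 0     # bandwidth currently in use
        self.rejected = 0    # connection requests refused since last check

    def request(self, bw):
        """Admit a user connection if the reservation has room."""
        if self.offered + bw <= self.capacity:
            self.offered += bw
            return True
        self.rejected += 1   # local congestion signal
        return False

class PAgent:
    """Performance agent: oversees the logical paths on one node's links."""
    def __init__(self, paths, spare):
        self.paths = paths
        self.spare = spare   # unreserved bandwidth on the physical links

    def step(self):
        """Continuous, event-driven adjustment: grow congested paths."""
        for p in self.paths:
            if p.rejected > 0 and self.spare > 0:
                grant = min(self.spare, 10)  # invented grant size
                p.capacity += grant
                self.spare -= grant
                p.rejected = 0

path = LogicalPath("LSP-1", capacity=10)
agent = PAgent([path], spare=20)
path.request(8)
path.request(5)        # rejected: would exceed the reservation
agent.step()           # agent reacts to the local congestion signal
print(path.capacity)   # 20
```

A real deployment would add the subordinate M agents feeding measurements to the P agent, and coordination with the paths reserved for preplanned restoration, which is precisely the integration the thesis addresses.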

Relevance: 30.00%

Abstract:

The proposal presented in this thesis is to provide designers of knowledge-based supervisory systems for dynamic systems with a framework that facilitates their tasks, avoiding interface problems among tools, data flow and management. The approach is intended to be useful to both control and process engineers in assisting their tasks. The use of AI technologies in the diagnosis and operation of control loops and, of course, in assisting process supervisory tasks such as fault detection and diagnosis, is within the scope of this work. Special effort has been put into the integration of tools for assisting the design of expert supervisory systems. With this aim, the experience of Computer Aided Control Systems Design (CACSD) frameworks has been analysed and used to design a Computer Aided Supervisory Systems (CASSD) framework. In this sense, some basic facilities are required to be available in this proposed framework:

Relevance: 30.00%

Abstract:

In molecular biology, it is often desirable to find common properties in large numbers of drug candidates. One family of methods stems from the data mining community, where algorithms to find frequent graphs have received increasing attention over the past years. However, the computational complexity of the underlying problem and the large amount of data to be explored essentially render sequential algorithms useless. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. This problem is characterized by a highly irregular search tree, for which no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely, a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute’s HIV-screening data set, where we were able to show close-to-linear speedup in a network of workstations. The proposed approach also allows for dynamic resource aggregation in a non-dedicated computational environment. These features make it suitable for large-scale, multi-domain, heterogeneous environments, such as computational grids.
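The receiver-initiated load balancing idea can be illustrated with a short sketch. This is a hedged toy model, not the paper's protocol: the class names and the donate-half policy are invented. An idle worker (the receiver) polls its peers and takes over part of a loaded peer's unexplored search-tree nodes.

```python
from collections import deque

class Worker:
    """A mining worker holding a queue of unexplored search-tree nodes."""
    def __init__(self, wid, tasks=()):
        self.wid = wid
        self.queue = deque(tasks)

    def idle(self):
        return not self.queue

    def donate(self):
        """Donor side: give away half of the pending subtrees."""
        half = len(self.queue) // 2
        return [self.queue.pop() for _ in range(half)]

def request_work(receiver, peers):
    """Receiver-initiated step: the idle worker asks peers for work."""
    for peer in peers:
        if not peer.idle():
            receiver.queue.extend(peer.donate())
            return True
    return False

workers = [Worker(0, ["subtree%d" % i for i in range(8)]), Worker(1)]
request_work(workers[1], [workers[0]])
print(len(workers[0].queue), len(workers[1].queue))  # 4 4
```

Because the search tree is highly irregular, pull-based stealing like this adapts to the actual workload instead of relying on an (unavailable) upfront prediction, which is the motivation the abstract gives for the receiver-initiated design.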

Relevance: 30.00%

Abstract:

The increasing demand for ecosystem services, in conjunction with climate change, is expected to significantly alter terrestrial ecosystems. In order to evaluate the sustainability of land and water resources, there is a need for a better understanding of the relationships between crop production, land surface characteristics and the energy and water cycles. These relationships are analysed using the Joint UK Land Environment Simulator (JULES). JULES includes the full hydrological cycle and vegetation effects on the energy, water, and carbon fluxes. However, this model currently only simulates land surface processes in natural ecosystems. An adapted version of JULES for agricultural ecosystems, called JULES-SUCROS, has therefore been developed. In addition to overall model improvements, JULES-SUCROS includes a dynamic crop growth structure that fully fits within and builds upon the biogeochemical modelling framework for natural vegetation. Specific agro-ecosystem features such as the development of yield-bearing organs and the phenological cycle from sowing till harvest have been included in the model. This paper describes the structure of JULES-SUCROS and evaluates the fluxes simulated with this model against FLUXNET measurements at 6 European sites. We show that JULES-SUCROS significantly improves the correlation between simulated and observed fluxes over cropland and captures well the spatial and temporal variability of the growth conditions in Europe. Simulations with JULES-SUCROS highlight the importance of vegetation structure and phenology, and the impact they have on land–atmosphere interactions.

Relevance: 30.00%

Abstract:

The suggestion is discussed that characteristic particle and field signatures at the dayside magnetopause, termed “flux transfer events” (FTEs), are, in at least some cases, due to transient solar wind and/or magnetosheath dynamic pressure increases, rather than time-dependent magnetic reconnection. It is found that most individual cases of FTEs observed by a single spacecraft can, at least qualitatively, be explained by the pressure pulse model, provided a few rather unsatisfactory features of the predictions are explained in terms of measurement uncertainties. The most notable exceptions to this are some “two-regime” observations made by two satellites simultaneously, one on either side of the magnetopause. However, this configuration has not been achieved frequently or for sufficient time; such observations are rare, and the relevant tests are still not conclusive. The strongest evidence that FTEs are produced by magnetic reconnection is the dependence of their occurrence on the north-south component of the interplanetary magnetic field (IMF) or of the magnetosheath field. The pressure pulse model provides an explanation for this dependence (albeit qualitative) in the case of magnetosheath FTEs, but this does not apply to magnetosphere FTEs. The only surveys of magnetosphere FTEs have not employed the simultaneous IMF, but have shown that their occurrence is strongly dependent on the north-south component of the magnetosheath field, as observed earlier or later on the same magnetopause crossing (for inbound and outbound passes, respectively). This paper employs statistics on the variability of the IMF orientation to investigate the effects of IMF changes between the times of the magnetosheath and FTE observations.
It is shown that the previously published results are consistent with magnetospheric FTEs being entirely absent when the magnetosheath field is northward: all crossings with magnetosphere FTEs and a northward field can be attributed to the field changing sense while the satellite was within the magnetosphere (but close enough to the magnetopause to detect an FTE). Allowance for the IMF variability also makes the occurrence frequency of magnetosphere FTEs during southward magnetosheath fields very similar to that observed for magnetosheath FTEs. Conversely, the probability of attaining the observed occurrence frequencies under the pressure pulse model is 10⁻¹⁴. In addition, it is argued that some magnetosheath FTEs should, for the pressure pulse model, have been observed for northward IMF: the probability that the number is as low as actually observed is estimated to be 10⁻¹⁰. It is concluded that although the pressure pulse model can be invoked to qualitatively explain a large number of individual FTE observations, the observed occurrence statistics are in gross disagreement with this model.

Relevance: 30.00%

Abstract:

For general home monitoring, a system should automatically interpret people’s actions. The system should be non-intrusive and able to deal with a cluttered background and loose clothes. An approach based on spatio-temporal local features and a Bag-of-Words (BoW) model is proposed for single-person action recognition from combined intensity and depth images. To restore the temporal structure lost in the traditional BoW method, a dynamic time alignment technique with temporal binning is applied in this work, which has not previously been implemented in the literature for human action recognition on depth imagery. A novel human action dataset with depth data has been created using two Microsoft Kinect sensors. The ReadingAct dataset contains 20 subjects and 19 actions, for a total of 2340 videos. To investigate the effect of using depth images and the proposed method, testing was conducted on three depth datasets, and the proposed method was compared to traditional Bag-of-Words methods. Results showed that the proposed method improves recognition accuracy when depth is added to the conventional intensity data, and has advantages when dealing with long actions.
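The temporal binning idea can be illustrated with a short sketch. This is an assumption-laden toy, not the paper's exact pipeline (the function name, normalisation and bin policy are invented): local features are split into temporal bins by their timestamps, one visual-word histogram is built per bin, and the per-bin histograms are concatenated so that coarse temporal order survives the BoW quantisation.

```python
import numpy as np

def bow_with_temporal_binning(timestamps, word_ids, n_words, n_bins, duration):
    """timestamps: frame time of each local feature; word_ids: its codebook
    assignment. Returns the concatenated per-bin visual-word histogram."""
    hist = np.zeros((n_bins, n_words))
    for t, w in zip(timestamps, word_ids):
        b = min(int(t / duration * n_bins), n_bins - 1)  # which temporal bin
        hist[b, w] += 1
    # L1-normalise each bin so clip length does not dominate the descriptor
    sums = hist.sum(axis=1, keepdims=True)
    hist = np.divide(hist, sums, out=np.zeros_like(hist), where=sums > 0)
    return hist.ravel()

# Three features: two early (words 0 and 1), one late (word 1).
h = bow_with_temporal_binning([0.1, 0.2, 0.9], [0, 1, 1],
                              n_words=3, n_bins=2, duration=1.0)
print(h.shape)  # (6,)
```

With a single bin this degenerates to plain BoW; with more bins the descriptor distinguishes, for example, "stand up then sit down" from "sit down then stand up", which plain BoW cannot.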

Relevance: 30.00%

Abstract:

Electrical methods of geophysical survey are known to produce results that are hard to predict at different times of the year and under differing weather conditions. This is a problem which can lead to misinterpretation of the archaeological features under investigation. The dynamic relationship between a ‘natural’ soil matrix and an archaeological feature is a complex one, which greatly affects the success of the feature’s detection when using active electrical methods of geophysical survey. This study has monitored the gradual variation of measured resistivity over a selection of study areas. By targeting difficult-to-find, and often ‘missing’, electrical anomalies of known archaeological features, this study has increased the understanding of both the detection and interpretation capabilities of such geophysical surveys. A 16-month time-lapse study over four archaeological features has taken place to investigate the aforementioned detection problem across different soils and environments. In addition to the commonly used Twin-Probe earth resistance survey, electrical resistivity imaging (ERI) and quadrature electromagnetic induction (EMI) were also utilised to explore the problem. Statistical analyses have provided a novel interpretation, which has yielded new insights into how the detection of archaeological features is influenced by the relationship between the target feature and the surrounding ‘natural’ soils. The study has highlighted both the complexity of, and previous misconceptions around, the predictability of the electrical methods. The analysis has confirmed that each site presents an individual and nuanced situation, with the variation clearly relating to the composition of the soils (particularly pore size) and the local weather history. The wide range of reasons behind survey success at each specific study site has been revealed.
The outcomes have shown that a simplistic model of seasonality is not universally applicable to the electrical detection of archaeological features. This has led to the development of a method for quantifying survey success, enabling a deeper understanding of the unique way in which each site is affected by the interaction of local environmental and geological conditions.

Relevance: 30.00%

Abstract:

In this paper, we construct a dynamic portrait of the inner asteroid belt. We use information about the distribution of test particles, which were initially placed on a perfectly rectangular grid of initial conditions, after 4.2 Myr of gravitational interactions with the Sun and five planets, from Mars to Neptune. Using the spectral analysis method introduced by Michtchenko et al., the asteroidal behaviour is illustrated in detail on dynamical, averaged and frequency maps. On the averaged and frequency maps, we superpose information on the proper elements and proper frequencies of real objects, extracted from the AstDyS database constructed by Milani and Knezevic. A comparison of the maps with the distribution of real objects allows us to detect possible dynamical mechanisms acting in the domain under study; these mechanisms are related to mean-motion and secular resonances. We note that the two- and three-body mean-motion resonances and the secular resonances (strong linear and weaker non-linear) play an important role in the diffusive transport of the objects. Their long-lasting action, overlaid with the Yarkovsky effect, may explain many observed features of the density, size and taxonomic distributions of the asteroids.

Relevance: 30.00%

Abstract:

The relationship between the structure and function of biological networks constitutes a fundamental issue in systems biology. In particular, the structure of protein-protein interaction networks is related to important biological functions. In this work, we investigated how the resilience of such networks is determined by their large-scale features. Four species are taken into account, namely the yeast Saccharomyces cerevisiae, the worm Caenorhabditis elegans, the fly Drosophila melanogaster and Homo sapiens. We adopted two entropy-related measurements (degree entropy and dynamic entropy) in order to quantify the overall degree of robustness of these networks. We verified that while they exhibit similar structural variations under random node removal, they differ significantly when subjected to intentional attacks (hub removal). As a matter of fact, more complex species tended to exhibit more robust networks. More specifically, we quantified how six important measurements of network topology (namely clustering coefficient, average degree of neighbors, average shortest path length, diameter, assortativity coefficient, and slope of the power-law degree distribution) correlate with the two entropy measurements. Our results revealed that the fraction of hubs and the average neighbor degree contribute significantly to the resilience of networks. In addition, the topological analysis of the removed hubs indicated that the presence of alternative paths between the proteins connected to hubs tends to reinforce resilience. The performed analysis helps to understand how resilience arises in networks and can be applied to the development of protein network models.
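As a minimal sketch of one of the two measurements, the degree entropy of a network can be computed as the Shannon entropy of its degree distribution. The paper's precise definition may differ; this follows the common form, and the star network below is an invented toy.

```python
import math
from collections import Counter

def degree_entropy(degrees):
    """Shannon entropy (bits) of the empirical degree distribution."""
    n = len(degrees)
    counts = Counter(degrees)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy star network: one hub of degree 4, four leaves of degree 1.
star = [4, 1, 1, 1, 1]
print(round(degree_entropy(star), 3))
# Removing the hub leaves four isolated nodes, all of degree 0:
# the degree distribution collapses and the entropy drops to zero.
print(degree_entropy([0, 0, 0, 0]) == 0.0)  # True
```

This illustrates why hub removal is the harsher test in the abstract: deleting a hub reshapes the degree distribution (and hence this entropy) far more than deleting a random node does.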

Relevance: 30.00%

Abstract:

The Patino Formation sandstones, which crop out in the Aregua neighborhood in Eastern Paraguay and show columnar joints near the contact zone with a nephelinite dyke, have as their main characteristics a high proportion of syntaxial quartz overgrowth and a porosity that originated from different processes, initially by dissolution and later by partial filling and fracturing. Features such as the presence of floating grains in the syntaxial cement, the transitional interpenetrative contact between the silica-rich cement and grains, as well as the intense fracture porosity, are strong indications that the cement was formed by dissolution and reprecipitation of quartz from the framework under the effect of thermal expansion followed by rapid contraction. The increase of the silica-rich cement towards the dyke, in association with the orthogonal disposition of the columns relative to the dyke walls, indicates that the igneous body may represent the main heat source for the interstitial aqueous solutions previously existing in the sediments. At the macroscopic scale, the increase of internal tensions in the sandstones is responsible for the nucleation of polygons, leading to the individualization of prisms, which are interconnected by a system of joints formed first on isotherm surfaces of low temperature and later on successive adjacent planes towards the dyke heat source.

Relevance: 30.00%

Abstract:

One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specific topics (e.g., geometry) drastically reduces the advantages of using e-learning environments on a larger scale, as usually happens in Brazil. This paper describes an algorithm, and a tool based on it, designed for the authoring and automatic checking of geometry exercises. The algorithm dynamically compares the distances between the geometric objects of the student's solution and the template's solution, provided by the author of the exercise. Each solution is a geometric construction which is considered a function receiving geometric objects (input) and returning other geometric objects (output). Thus, for a given problem, if we know one function (construction) that solves the problem, we can compare it to any other function to check whether they are equivalent or not. Two functions are equivalent if, and only if, they have the same output when the same input is applied. If the student's solution is equivalent to the template's solution, then we consider the student's solution correct. Our software utility provides both authoring and checking tools that work directly on the Internet, together with learning management systems. These tools are implemented using the dynamic geometry software iGeom, which has been used in a geometry course since 2004 and has a successful track record in the classroom. Empowered with these new features, iGeom simplifies teachers' tasks, solves non-trivial problems in student solutions and helps to increase student motivation by providing feedback in real time. (c) 2008 Elsevier Ltd. All rights reserved.
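The constructions-as-functions idea above can be sketched as a sampled comparison of two construction functions. This is a hypothetical toy: the midpoint constructions, the random sampling and the tolerance are invented for illustration, and iGeom's actual algorithm compares distances between geometric objects rather than raw coordinates.

```python
import random

def midpoint_direct(a, b):
    """Template construction: midpoint of segment ab."""
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def midpoint_via_vector(a, b):
    """A student's different, but equivalent, construction."""
    return (a[0] + (b[0] - a[0]) / 2, a[1] + (b[1] - a[1]) / 2)

def equivalent(f, g, trials=100, tol=1e-9):
    """Treat each construction as a function: equivalent iff they give
    the same output for the same input (checked on sampled inputs)."""
    rng = random.Random(0)
    for _ in range(trials):
        a = (rng.uniform(-10, 10), rng.uniform(-10, 10))
        b = (rng.uniform(-10, 10), rng.uniform(-10, 10))
        fa, ga = f(a, b), g(a, b)
        if abs(fa[0] - ga[0]) > tol or abs(fa[1] - ga[1]) > tol:
            return False
    return True

print(equivalent(midpoint_direct, midpoint_via_vector))  # True
```

The appeal of the functional view is that the checker never needs to match the student's construction steps against the template's; only the input-output behaviour matters, so any correct construction is accepted.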

Relevance: 30.00%

Abstract:

The predominant knowledge-based approach to automated model construction, compositional modelling, employs a set of models of particular functional components. Its inference mechanism takes a scenario describing the constituent interacting components of a system and translates it into a useful mathematical model. This paper presents a novel compositional modelling approach aimed at building model repositories. It furthers the field in two respects. Firstly, it expands the application domain of compositional modelling to systems that cannot be easily described in terms of interacting functional components, such as ecological systems. Secondly, it enables the incorporation of user preferences into the model selection process. These features are achieved by casting the compositional modelling problem as an activity-based dynamic preference constraint satisfaction problem, where the dynamic constraints describe the restrictions imposed on the composition of partial models and the preferences correspond to those of the user of the automated modeller. In addition, the preference levels are represented through symbolic values that differ in orders of magnitude.
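A toy rendering of the preference-CSP formulation may make it concrete (the ecological domain, the hard constraint and the penalty values below are all invented for illustration, and numeric powers of ten stand in for the paper's symbolic order-of-magnitude levels): each aspect of the scenario selects one partial model, hard constraints prune inconsistent combinations, and preference penalties rank the rest.

```python
from itertools import product

# Each aspect of the scenario offers alternative partial models.
aspects = {
    "growth": ["logistic", "exponential"],
    "predation": ["lotka_volterra", "none"],
}

def consistent(model):
    """Hard (dynamic) constraint: an invented incompatibility."""
    return not (model["growth"] == "exponential"
                and model["predation"] == "none")

# Order-of-magnitude penalties (10**k) standing in for symbolic
# preference levels; lower total penalty = more preferred.
penalty = {"logistic": 1, "exponential": 10, "lotka_volterra": 1, "none": 100}

def compose():
    """Enumerate compositions, keep consistent ones, return the best."""
    names = list(aspects)
    best = None
    for choice in product(*aspects.values()):
        model = dict(zip(names, choice))
        if not consistent(model):
            continue
        cost = sum(penalty[v] for v in model.values())
        if best is None or cost < best[0]:
            best = (cost, model)
    return best[1]

print(compose())  # {'growth': 'logistic', 'predation': 'lotka_volterra'}
```

Because the levels differ by orders of magnitude, a single strongly dispreferred choice can never be outweighed by many mildly dispreferred ones, which mirrors the role of the symbolic preference values described in the abstract.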