961 results for rules application algorithms
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, combined with sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from the data with ML algorithms. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study of the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be used efficiently in the decision-making process. (C) 2003 Elsevier Ltd. All rights reserved.
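A minimal sketch of the first two steps of such a hybrid workflow, assuming scikit-learn and numpy are available: a multilayer perceptron models the long-range trend from the spatial coordinates, and a crude empirical variogram of the residuals checks how much spatial structure remains (the paper's variography step). The network size and binning are arbitrary, and the sequential simulation of the residuals is omitted, so this is not the paper's full MLRSS procedure.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_trend_and_residuals(coords, values):
    """Model the long-range spatial trend with an MLP and return the residuals."""
    mlp = MLPRegressor(hidden_layer_sizes=(20, 20), solver='lbfgs',
                       max_iter=5000, random_state=0)
    mlp.fit(coords, values)
    return mlp, values - mlp.predict(coords)

def empirical_variogram(coords, residuals, n_bins=10):
    """Crude empirical variogram of the residuals (illustrative binning)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    g = 0.5 * (residuals[:, None] - residuals[None, :]) ** 2
    iu = np.triu_indices(len(residuals), k=1)
    bins = np.linspace(0.0, d[iu].max(), n_bins + 1)
    idx = np.digitize(d[iu], bins) - 1
    return np.array([g[iu][idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(n_bins)])
```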
Abstract:
The Wigner higher order moment spectra (WHOS) are defined as extensions of the Wigner-Ville distribution (WD) to higher order moment spectra domains. A general class of time-frequency higher order moment spectra is also defined in terms of arbitrary higher order moments of the signal, as generalizations of Cohen's general class of time-frequency representations. The properties of the general class of time-frequency higher order moment spectra can be related to the properties of WHOS, which are, in fact, extensions of the properties of the WD. Discrete time and frequency Wigner higher order moment spectra (DTF-WHOS) distributions are introduced for signal processing applications and are shown to be implemented with two FFT-based algorithms. One application is presented where the Wigner bispectrum (WB), which is a WHOS in the third-order moment domain, is utilized for the detection of transient signals embedded in noise. The WB is compared with the WD in terms of simulation examples and analysis of real sonar data. It is shown that better detection schemes can be derived, at low signal-to-noise ratio, when the WB is applied.
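For orientation, a minimal numpy sketch of the second-order discrete Wigner-Ville distribution, computed with one FFT per time index, is given below. It is a generic textbook formulation rather than the two FFT-based DTF-WHOS algorithms of the paper; the Wigner bispectrum would replace the two-term product by a third-order moment kernel.

```python
import numpy as np

def discrete_wigner_ville(x):
    """Discrete (pseudo) Wigner-Ville distribution of a 1-D analytic signal;
    rows index time, columns index frequency bins."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)              # largest half-lag inside the record
        taus = np.arange(-tau_max, tau_max + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[taus % N] = x[n + taus] * np.conj(x[n - taus])
        W[n] = np.fft.fft(kernel).real           # Hermitian kernel -> real spectrum
    return W

# e.g. an analytic linear chirp shows up as a line in the time-frequency plane
t = np.arange(256)
W = discrete_wigner_ville(np.exp(1j * 2 * np.pi * (0.05 + 0.0005 * t) * t))
```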
Abstract:
European regulatory networks (ERNs) are in charge of producing and disseminating non-binding standards, guidelines and recommendations in a number of important domains, such as banking and finance, electricity and gas, telecommunications, and competition regulation. The goal of these soft rules is to promote 'best practices', achieve co-ordination among regulatory authorities and ensure the consistent application of harmonized pro-competition rules across Europe. This contribution examines the domestic adoption of the soft rules developed within the four main ERNs. Different factors are expected to influence the process of domestic adoption: the resources of regulators; the existence of a review panel; and the interdependence of the issues at stake. The empirical analysis supports hypotheses about the relevance of network-level factors: monitoring and public reporting procedures increase the final level of adoption, while soft rules concerning highly interdependent policy areas are adopted earlier.
Abstract:
This Master's thesis describes the development of the core of a communication application for the Symbian platform. The requirement for the application as a whole was to respond to missed calls with predefined text messages according to rules defined by the user. The non-functional requirements were reducing resource usage and enabling reuse. The goal of this work was therefore to develop a core that encapsulates the part of the application's functionality that is independent of the user interface and reusable. The development was guided by the Unified Process, an iterative, use-case-driven and architecture-centric software process. It also encouraged the use of other established industry practices, such as design patterns and visual modeling with the Unified Modelling Language. Design patterns were used during development, and the software was modeled visually to advance and clarify the design. Platform services were exploited to minimize development time and resource usage. The main responsibilities of the core were defined as sending messages and storing and checking rules. The different parts of the application, namely the application server and the user interface, were able to use the core, and the core had no dependencies on the user interface layer. Thus resource usage decreased and reusability increased. Message sending was implemented using the facilities of the Symbian platform. For storing rules, a persistence framework was built that isolates the internal representation of the rules from their external storage format; in this case, a relational database was chosen as the external storage format. Rule checking was implemented through ordinary object collaboration. The main goal was achieved. This and the other outcomes judged positive, such as reusability and reduced resource usage, were attributed to the use of design patterns and the Unified Process. These practices proved to adapt even to small projects. They were also found to support and encourage learning during development, which was essential in this case.
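The thesis's core was implemented in C++ on Symbian; purely to illustrate the rule-check-and-reply task described above, here is a hypothetical Python sketch (the class and field names are invented, not taken from the thesis) of selecting a predefined SMS for a missed call.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReplyRule:
    """A user-defined rule: reply to a missed call with a predefined SMS."""
    caller_prefix: str          # a full number or a prefix such as "+358"
    active: bool
    message: str

@dataclass
class MissedCall:
    caller: str

def select_reply(rules: list[ReplyRule], call: MissedCall) -> Optional[str]:
    """Return the SMS text of the first active rule matching the caller,
    or None if no rule applies (the 'rule checking' task of the core)."""
    for rule in rules:
        if rule.active and call.caller.startswith(rule.caller_prefix):
            return rule.message
    return None

rules = [ReplyRule("+358", True, "Sorry, I missed your call; I'll get back to you.")]
print(select_reply(rules, MissedCall(caller="+358401234567")))
```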
Abstract:
Langevin equations of Ginzburg-Landau form, with multiplicative noise, are proposed to study the effects of fluctuations in domain growth. These equations are derived from a coarse-grained methodology. The Cahn-Hilliard-Cook linear stability analysis predicts some effects in the transitory regime. We also derive numerical algorithms for the computer simulation of these equations. The numerical results corroborate the analytical predictions of the linear analysis. We also present simulation results for spinodal decomposition at large times.
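As a rough illustration of such a simulation algorithm, below is a one-dimensional Euler-Maruyama update for a Ginzburg-Landau-type Langevin equation with a field-dependent (multiplicative) noise term; the potential parameters and the form of the noise coupling are assumptions made for the sketch, not the coarse-grained model derived in the paper.

```python
import numpy as np

def tdgl_step(phi, dt, dx, r=-1.0, u=1.0, D=1.0, noise_amp=0.1, rng=None):
    """One Euler-Maruyama step for dphi/dt = -(r*phi + u*phi^3) + D*lap(phi) + noise,
    with noise amplitude proportional to the local field (multiplicative)."""
    rng = np.random.default_rng() if rng is None else rng
    lap = (np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi) / dx**2   # periodic boundary
    drift = -(r * phi + u * phi**3) + D * lap
    noise = noise_amp * phi * np.sqrt(dt) * rng.standard_normal(phi.shape)
    return phi + dt * drift + noise

# start from small fluctuations and let domains of phi = +1 / -1 grow
phi = 0.01 * np.random.default_rng(0).standard_normal(256)
for _ in range(2000):
    phi = tdgl_step(phi, dt=0.01, dx=1.0)
```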
Abstract:
This thesis brings together a series of meta-analyses, that is, analyses of analyses produced by sociologists (notably those resulting from the application of methods for treating interviews). The approach is reflexive and aimed at the concrete practices of sociologists, considered as activities governed by rules. An important part of this thesis is therefore devoted to the development of a "pragmatological" analytical tool (E. Durkheim) for studying such practices and the rules that govern them. To approach these rules, Wittgenstein-inspired analytic philosophy offers several important proposals. Rules are first seen as family-resemblance concepts: there is no common definition covering all rules. To study them, it is therefore necessary to draw distinctions based on how they are used. One of these distinctions concerns the difference between constitutive rules and regulative rules: a constitutive rule creates a practice (for example, marriage), while a regulative rule applies to activities that can exist without it (for example, the rules of etiquette). The methodological activity of sociologists relies on, and is constrained by, these types of rules, which are essentially implicit. Through the description and codification of rules, this thesis aims to account for the normative character of methods in the analytical practices of sociology. Particular emphasis is placed on the logical limits established by constitutive rules, which render certain actions of sociologists impossible (rather than forbidden).
Abstract:
Fuzzy logic admits infinite intermediate logical values between false and true. Using this principle, this study developed a system based on fuzzy rules that indicates the body mass index of ruminant animals in order to determine the best time for slaughter. The controller developed takes the variables weight and height as inputs and produces as output a new body mass index, called the Fuzzy Body Mass Index (Fuzzy BMI), which may serve as a detection system at the time of livestock slaughtering, comparing animals with one another through the linguistic variables "Very Low", "Low", "Average", "High" and "Very High". To demonstrate the application of this fuzzy system, an analysis was made with 147 Nellore beeves to determine the Fuzzy BMI value of each animal and indicate the distribution of body mass across the herd. The performance of the system was validated through a statistical analysis yielding a Pearson correlation coefficient of 0.923, a high positive correlation, indicating that the proposed method is appropriate. Thus, the method allows evaluation of the herd by comparing each animal within the group, providing a quantitative aid to the farmer's decision. It was concluded that this study established a computational method based on fuzzy logic that mimics part of human reasoning and interprets the body mass index of any bovine breed in any region of the country.
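To make the idea concrete, here is a small Mamdani-style sketch of a Fuzzy BMI controller with weight and height inputs and centroid defuzzification. The membership ranges and the three rules (and the use of three output sets instead of the study's five linguistic terms) are illustrative guesses, not the rule base developed in the study.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_bmi(weight, height):
    """Fuzzy BMI on a 0-100 scale from weight (kg) and height (m)."""
    # input memberships (assumed ranges for adult Nellore cattle)
    w_low  = trimf(weight, 200, 300, 450)
    w_avg  = trimf(weight, 300, 450, 600)
    w_high = trimf(weight, 500, 650, 800)
    h_low  = trimf(height, 1.0, 1.2, 1.4)
    h_high = trimf(height, 1.3, 1.5, 1.7)

    out = np.linspace(0, 100, 501)               # output universe
    low_mf  = trimf(out, 0, 20, 45)
    avg_mf  = trimf(out, 30, 50, 70)
    high_mf = trimf(out, 55, 80, 100)

    # illustrative rules: heavy & short -> high, average & tall -> average, light & tall -> low
    agg = np.maximum.reduce([
        np.minimum(min(w_high, h_low),  high_mf),
        np.minimum(min(w_avg,  h_high), avg_mf),
        np.minimum(min(w_low,  h_high), low_mf),
    ])
    if agg.sum() == 0.0:
        return 50.0                              # no rule fired
    return float((out * agg).sum() / agg.sum())  # centroid defuzzification

print(fuzzy_bmi(weight=620, height=1.32))        # a heavy, short animal scores high
```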
Abstract:
Precision irrigation seeks to establish strategies that achieve an efficient ratio between the volume of water used (reduction in input) and the productivity obtained (increase in production). There are several studies in the literature on strategies for achieving this efficiency, such as those dealing with the method of volumetric water balance (VWB). However, it is also of great practical and economic interest to set up versatile implementations of irrigation strategies that: (i) maintain the performance obtained with other implementations, (ii) rely on few computational resources, (iii) adapt well to field conditions, and (iv) allow easy modification of the irrigation strategy. In this study, such characteristics are achieved by using an Artificial Neural Network (ANN) to determine the irrigation period for a watermelon crop in the Irrigation Perimeter of the Lower Acaraú, in the state of Ceará, Brazil. The volumetric water balance was taken as the standard against which the management carried out with the proposed ANN implementation was compared. The statistical analysis demonstrates the effectiveness of the proposed management, which is able to replace VWB as a strategy in automation.
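As an illustration of characteristic (ii), an irrigation-period predictor that needs few computational resources, the sketch below trains a small scikit-learn MLP on synthetic data. The input features (temperature, relative humidity, soil moisture) and the synthetic target standing in for the VWB-derived irrigation period are assumptions made for the sketch, not the variables used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# hypothetical daily features: air temperature (degC), relative humidity (%), soil moisture (%)
X = np.column_stack([rng.uniform(22, 38, 200),
                     rng.uniform(40, 95, 200),
                     rng.uniform(10, 35, 200)])
# synthetic stand-in for the VWB-derived irrigation period (minutes per day)
y = np.clip(2.0 * X[:, 0] - 0.4 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(0, 2, 200), 0, None)

ann = MLPRegressor(hidden_layer_sizes=(10,), solver='lbfgs', max_iter=5000, random_state=0)
ann.fit(X[:150], y[:150])                         # train on the first 150 days
print("predicted irrigation period (min):", ann.predict(X[150:153]).round(1))
```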
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust-region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential-equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and the identification of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry, and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
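For intuition, the following is a simplified subspace-constrained (mean-shift-style) step toward a ridge of a Gaussian kernel density estimate. It is not the thesis's trust-region Newton projection method, and the bandwidth, step size and iteration count are arbitrary choices for the sketch.

```python
import numpy as np

def kde_grad_hess(x, data, h):
    """Gradient and Hessian of a Gaussian KDE at x, up to a positive constant."""
    diff = data - x                                        # (n, d)
    w = np.exp(-0.5 * np.sum(diff**2, axis=1) / h**2)      # kernel weights
    grad = (w[:, None] * diff).sum(axis=0) / h**2
    hess = np.einsum('i,ij,ik->jk', w, diff, diff) / h**4 - w.sum() * np.eye(len(x)) / h**2
    return grad, hess

def ridge_step(x, data, h, ridge_dim=1, step=0.1):
    """Project the density gradient onto the directions of smallest curvature
    (orthogonal to a ridge of dimension ridge_dim) and take a small ascent step."""
    grad, hess = kde_grad_hess(x, data, h)
    _, eigvec = np.linalg.eigh(hess)                       # eigenvalues in ascending order
    V = eigvec[:, :len(x) - ridge_dim]                     # smallest-eigenvalue directions
    return x + step * V @ (V.T @ grad)

# crude usage: iterate the step from a starting point so it settles on a ridge
data = np.random.default_rng(0).normal(size=(300, 2))
x = np.array([0.5, 0.5])
for _ in range(200):
    x = ridge_step(x, data, h=0.5)
```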
Abstract:
This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize the on-chip resources. As the dark silicon era approaches, where power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most works on resource management treat only the physical components (i.e. computation, communication, and memory blocks) as resources and manipulate the component-to-application mapping to optimize various parameters (e.g. energy efficiency). To further enhance the optimization potential, in addition to the physical resources we propose to manipulate abstract resources (i.e. the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture). The proposed framework (i.e. VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) the Private Operating Environment (POE), (ii) the Private Reliability Environment (PRE), and (iii) the Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. In this work several novel architectural enhancements, algorithms, and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse Grained Reconfigurable Architectures (CGRAs) and Networks on Chip (NoCs) to test the feasibility of our approach. Specifically, we have chosen the Dynamically Reconfigurable Resource Array (DRRA) and McNoC as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments. Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.
Abstract:
The credibility of rules and the elements of power are fundamental keys in the analysis of political institutions. This paper opens the "black box" of the European Union institutions and analyses the problem of credibility in the commitment to the Stability and Growth Pact (SGP). The Pact constituted a formal rule that sought to enforce budgetary discipline on the European states. Compliance with this contract could be ensured by the existence of "third party enforcement" or by the coincidence of the ex-ante and ex-post interests of the states (reputational capital). The fact is that states such as France or Germany failed to comply with the rule and managed to avoid the application of sanctions. This article studies the transactions and the hierarchy of power that exist in the European institutions, and analyses the institutional framework included in the new European Constitution.
Abstract:
This research addressed the question of the role of explicit algorithms and episodic contexts in the acquisition of computational procedures for regrouping in subtraction. Three groups of students having difficulty learning to subtract with regrouping were taught procedures for doing so through either an explicit algorithm, an episodic context, or an examples approach. It was hypothesized that the use of an explicit algorithm represented in a flow chart format would facilitate the acquisition and retention of specific procedural steps relative to the other two conditions. On the other hand, the use of paragraph stories to create episodic context was expected to facilitate the retrieval of algorithms, particularly in a mixed presentation format. The subjects were tested on similar, near, and far transfer questions over a four-day period. Near and far transfer algorithms were also introduced on Day Two. The results suggested that both explicit algorithms and episodic context facilitate performance on questions requiring subtraction with regrouping. However, the differential effects of these two approaches on near and far transfer questions were not as easy to identify. Explicit algorithms may facilitate the acquisition of specific procedural steps while at the same time inhibiting the application of such steps to transfer questions. Similarly, the value of episodic context in cuing the retrieval of an algorithm may be limited by the ability of a subject to identify and classify a new question as an exemplar of a particular episodically defined problem type or category. The implications of these findings for the procedures employed in teaching mathematics to students with learning problems are discussed in detail.
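For concreteness, one way to express the explicit regrouping algorithm in code (assuming non-negative integers with the minuend at least as large as the subtrahend); this is an illustration of the column-by-column procedure, not the flow chart used in the study.

```python
def subtract_with_regrouping(minuend, subtrahend):
    """Column-by-column subtraction with regrouping (borrowing)."""
    top = [int(d) for d in str(minuend)]
    bottom = [int(d) for d in str(subtrahend).rjust(len(top), '0')]
    digits = []
    for i in range(len(top) - 1, -1, -1):        # work from the rightmost column
        if top[i] < bottom[i]:
            top[i] += 10                         # regroup: borrow ten ...
            top[i - 1] -= 1                      # ... from the column to the left
        digits.append(str(top[i] - bottom[i]))
    return int(''.join(reversed(digits)))

print(subtract_with_regrouping(503, 78))         # -> 425, after two regroupings
```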
Abstract:
This work investigates mathematical details and computational aspects of Metropolis-Hastings reptation quantum Monte Carlo and its variants, in addition to the Bounce method and its variants. The issues that concern us include the sensitivity of these algorithms' target densities to the position of the trial electron density along the reptile, the time-reversal symmetry of the propagators, and the length of the reptile. We calculate the ground-state energy and one-electron properties of LiH at its equilibrium geometry for all these algorithms. The importance sampling is performed with a single-determinant, large Slater-type orbital (STO) basis set. The computer codes were written to exploit the efficiencies engineered into modern, high-performance computing software. Using the Bounce method to calculate non-energy-related properties, those represented by operators that do not commute with the Hamiltonian, is novel work. We found that the unmodified Bounce method gives a good ground-state energy and very good one-electron properties. We attribute this to the favourable time-reversal symmetry of the Green's functions in its target density. Breaking this symmetry gives poorer results. Use of a short reptile in the Bounce method does not alter the quality of the results. This suggests that in future applications one can use a shorter reptile to cut down the computational time dramatically.
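For readers unfamiliar with this family of methods, the sketch below is a generic random-walk Metropolis-Hastings sampler showing only the accept/reject rule that reptation QMC and the Bounce method build on; it is not the reptation or Bounce algorithm itself, and the Gaussian proposal and step size are arbitrary.

```python
import numpy as np

def metropolis_hastings(log_density, x0, n_steps, step_size=0.5, rng=None):
    """Random-walk Metropolis-Hastings: propose a Gaussian move and accept it
    with probability min(1, p(proposal) / p(current))."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    logp = log_density(x)
    samples = []
    for _ in range(n_steps):
        proposal = x + step_size * rng.standard_normal(x.shape)
        logp_new = log_density(proposal)
        if np.log(rng.random()) < logp_new - logp:   # symmetric proposal cancels in the ratio
            x, logp = proposal, logp_new
        samples.append(x.copy())
    return np.array(samples)

# e.g. sample a 2-D standard normal target
samples = metropolis_hastings(lambda z: -0.5 * np.sum(z**2), np.zeros(2), 5000)
```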
Abstract:
Feature selection plays an important role in knowledge discovery and data mining nowadays. In traditional rough set theory, feature selection using a reduct - the minimal discerning set of attributes - is an important area. Nevertheless, the original definition of a reduct is restrictive, so in previous research it was proposed to take into account not only the horizontal reduction of information by feature selection, but also a vertical reduction considering suitable subsets of the original set of objects. Following the work mentioned above, a new approach to generating bireducts using a multi-objective genetic algorithm was proposed. Although genetic algorithms were used to calculate reducts in some previous works, we did not find any work where genetic algorithms were adopted to calculate bireducts. Compared to the earlier work in this area, the proposed method has less randomness in generating bireducts. The genetic algorithm system estimated the quality of each bireduct by the values of two objective functions as evolution progressed, so that a set of bireducts with optimized values of these objectives was obtained. Different fitness evaluation methods and genetic operators, such as crossover and mutation, were applied and the resulting prediction accuracies were compared. Five datasets were used to test the proposed method and two datasets were used to perform a comparison study. Statistical analysis using the one-way ANOVA test was performed to determine the significance of differences between the results. The experiments showed that the proposed method was able to reduce the number of bireducts necessary to achieve good prediction accuracy. The influence of different genetic operators and fitness evaluation strategies on the prediction accuracy was also analyzed. It was shown that the prediction accuracies of the proposed method are comparable with the best results in the machine learning literature, and some of them outperform those results.
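A minimal sketch of the ingredients such a genetic algorithm might use: a bireduct candidate encoded as two boolean masks (selected attributes, selected objects), two objective values to maximize, and simple crossover/mutation operators on the masks. The consistency check and the penalty for inconsistent candidates are simplified stand-ins for the fitness evaluation methods compared in the thesis.

```python
import numpy as np

def consistent(X, y, attrs, objs):
    """True if the selected attributes discern every pair of selected objects
    with different decisions (the defining condition of a bireduct;
    minimality is not checked here)."""
    Xs, ys = X[np.ix_(objs, attrs)], y[objs]
    for i in range(len(ys)):
        for j in range(i + 1, len(ys)):
            if ys[i] != ys[j] and np.array_equal(Xs[i], Xs[j]):
                return False
    return True

def objectives(X, y, attr_mask, obj_mask):
    """Two objectives to maximize: few attributes and many covered objects."""
    attrs, objs = np.where(attr_mask)[0], np.where(obj_mask)[0]
    if len(attrs) == 0 or not consistent(X, y, attrs, objs):
        return (-np.inf, -np.inf)                 # penalize invalid candidates
    return (-len(attrs), len(objs))

def crossover_and_mutate(a, b, rng, p_mut=0.05):
    """One-point crossover of two boolean masks followed by bit-flip mutation."""
    cut = rng.integers(1, len(a))
    child = np.concatenate([a[:cut], b[cut:]])
    return child ^ (rng.random(len(child)) < p_mut)
```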