836 results for Framework Model
Abstract:
On the basis of the convolutional (Hamming) version of the recent Neural Network Assembly Memory Model (NNAMM), optimal receiver operating characteristics (ROCs) have been derived analytically for an intact two-layer autoassociative Hopfield network. A method for explicitly taking into account the a priori probabilities of alternative hypotheses on the structure of the information initiating memory trace retrieval is introduced, together with modified ROCs (mROCs, a posteriori probabilities of correct recall vs. false-alarm probability). The comparison of empirical and calculated ROCs (or mROCs) demonstrates that they coincide quantitatively, and in this way the intensities of cues used in the corresponding experiments may be estimated. It is also found that basic ROC properties, which are among the experimental findings underpinning dual-process models of recognition memory, can be explained within our one-factor NNAMM.
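As a rough illustration of the mROC idea (a posteriori probability of correct recall plotted against false-alarm probability, weighted by the prior of the retrieval hypothesis), the following Python sketch converts an ordinary ROC into an mROC with Bayes' rule; the toy ROC curve and the function name `mroc` are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mroc(hit_rate, false_alarm_rate, prior):
    """A posteriori probability of correct recall for each ROC point,
    given the a priori probability `prior` of the 'trace present'
    hypothesis. A generic Bayes-rule sketch, not the NNAMM formula."""
    hr = np.asarray(hit_rate, dtype=float)
    fa = np.asarray(false_alarm_rate, dtype=float)
    posterior = prior * hr / (prior * hr + (1.0 - prior) * fa)
    return fa, posterior  # plot posterior vs. false-alarm probability

# Example: a toy concave ROC and equal priors
fa = np.linspace(0.01, 0.99, 50)
hr = fa ** 0.5
x, y = mroc(hr, fa, prior=0.5)
```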
Abstract:
The demands on contemporary information systems are constantly increasing. In a dynamic business environment, an organization has to be prepared for sudden growth, shrinking, or other kinds of reorganization. Such changes bring the need to adapt the information system serving the company. Associating access rights to parts of the system with users, groups of users, user roles, etc. is of great importance for defining the different activities in the company and for restricting the access rights of each employee according to his or her status. The mechanisms for access rights management are usually taken into account during system design and, in most cases, are built into the system. This paper offers an approach to developing a user rights framework applicable to information systems: a reusable, extendable mechanism that can be integrated into them.
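A hedged sketch of the kind of reusable role-based mapping the abstract describes (users to roles to permissions, kept separate from application logic); the role and permission names are invented for illustration and do not come from the paper:

```python
# users -> roles -> permissions, decoupled from the application so the
# mechanism stays reusable and extendable. All names are illustrative.
role_permissions = {
    "accountant": {"invoices:read", "invoices:write"},
    "auditor": {"invoices:read", "reports:read"},
}
user_roles = {"alice": {"accountant"}, "bob": {"auditor"}}

def is_allowed(user: str, permission: str) -> bool:
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in role_permissions.get(r, set())
               for r in user_roles.get(user, set()))

assert is_allowed("alice", "invoices:write")
assert not is_allowed("bob", "invoices:write")
```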
Abstract:
Todor P. Cholakov, Dimitar Y. Birov - This article presents a comprehensive model for the automated reengineering of legacy systems. It describes in detail the processes of software translation and refactoring and the extent to which these processes can be automated. With respect to code translation, a model is presented for the automated translation of code containing pointers and address arithmetic. A framework for the reengineering process is also defined, and opportunities are outlined for the further development of concepts, tools, and algorithms.
Abstract:
The thesis deals with standing and justiciability in climate litigation against governments and the private sector. The first part addresses the impacts of climate change on human rights, the major developments in international climate law, and the historical reasons for climate litigation. The second part analyses several cases, divided into categories, and then draws a comparative conclusion for each category. The third part deals with the Italian legal tradition on standing and justiciability, starting from the historical roots of such rules. The fourth part introduces the 'Model Statute' drafted by the International Bar Association, arguing that the 'ratio legis' of this proposal could be implemented in Italy or the EU. The thesis develops arguments, based on the existing legal framework, to help plaintiffs establish standing and justiciability in proceedings pending before Italian courts. It further proposes the idea that 'citizen suits' are consistent with the Italian and EU legal tradition and that the EU could rely on citizen suits to privately enforce its climate law and policies under the 'European Green Deal.'
Abstract:
Resource specialisation, although a fundamental component of ecological theory, is employed in disparate ways. Most definitions derive from simple counts of resource species. We build on recent advances in ecophylogenetics and null model analysis to propose a concept of specialisation that comprises affinities among resources as well as their co-occurrence with consumers. In the distance-based specialisation index (DSI), specialisation is measured as relatedness (phylogenetic or otherwise) of resources, scaled by the null expectation of random use of locally available resources. Thus, specialists use significantly clustered sets of resources, whereas generalists use over-dispersed resources. Intermediate species are classed as indiscriminate consumers. The effectiveness of this approach was assessed with differentially restricted null models, applied to a data set of 168 herbivorous insect species and their hosts. Incorporation of plant relatedness and relative abundance greatly improved specialisation measures compared to taxon counts or simpler null models, which overestimate the fraction of specialists, a problem compounded by insufficient sampling effort. This framework disambiguates the concept of specialisation with an explicit measure applicable to any mode of affinity among resource classes, and is also linked to ecological and evolutionary processes. This will enable a more rigorous deployment of ecological specialisation in empirical and theoretical studies.
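A hedged Python sketch of a DSI-style computation following the stated idea (mean pairwise relatedness of the resources a consumer uses, scaled against a null model of random use weighted by local abundance); the function name, sign convention, and null-model details are assumptions, not the authors' code:

```python
import numpy as np

def dsi(used, dist, abundance, n_null=999, rng=None):
    """Z-score of the mean pairwise distance among a consumer's used
    resources (indices `used`) against a null model drawing the same
    number of resources in proportion to local abundance. `dist` is a
    square matrix of pairwise (e.g. phylogenetic) distances."""
    rng = rng or np.random.default_rng(0)
    def mean_pairwise(idx):
        sub = dist[np.ix_(idx, idx)]
        return sub[np.triu_indices(len(idx), k=1)].mean()
    obs = mean_pairwise(used)
    p = abundance / abundance.sum()
    null = np.array([mean_pairwise(rng.choice(len(p), size=len(used),
                                              replace=False, p=p))
                     for _ in range(n_null)])
    # negative => resources more clustered than expected (specialist)
    return (obs - null.mean()) / null.std()
```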
Abstract:
The nuclear gross theory, originally formulated by Takahashi and Yamada (1969 Prog. Theor. Phys. 41 1470) for beta decay, is applied to electron-neutrino-nucleus reactions, employing a more realistic description of the energetics of the Gamow-Teller resonances. The model parameters are gauged from the most recent experimental data, both for β⁻-decay and electron capture, separately for even-even, even-odd, odd-odd, and odd-even nuclei. The numerical estimates for neutrino-nucleus cross-sections agree fairly well with previous evaluations done within the framework of microscopic models. The formalism presented here can be extended to the heavy-nuclei mass region, where weak processes are quite relevant and which is of astrophysical interest because of its applications in supernova explosive nucleosynthesis.
Abstract:
The existence of juxtaposed regions of distinct cultures, in spite of the fact that people's beliefs tend to become more similar to each other's as the individuals interact repeatedly, is a puzzling phenomenon in the social sciences. Here we study an extreme version of the frequency-dependent-bias model of social influence in which an individual adopts the opinion shared by the majority of the members of its extended neighborhood, which includes the individual itself. This is a variant of the majority-vote model in which the individual retains its opinion in case there is a tie among the neighbors' opinions. We assume that the individuals are fixed at the sites of a square lattice of linear size L and that they interact with their nearest neighbors only. Within a mean-field framework, we derive the equations of motion for the density of individuals adopting a particular opinion in the single-site and pair approximations. Although the single-site approximation predicts a single opinion domain that takes over the entire lattice, the pair approximation yields a qualitatively correct picture, with the coexistence of different opinion domains and a strong dependence on the initial conditions. Extensive Monte Carlo simulations indicate the existence of a rich distribution of opinion domains or clusters, the number of which grows with L^2 whereas the size of the largest cluster grows with ln L^2. The analysis of the sizes of the opinion domains shows that they obey a power-law distribution for not-too-large sizes but are exponentially distributed in the limit of very large clusters. In addition, similarly to another well-known social influence model, Axelrod's model, we find that these opinion domains are unstable to the effect of a thermal-like noise.
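The update rule described above is simple enough to sketch directly. The Monte Carlo fragment below assumes binary opinions, for which the four-neighbor-plus-self neighborhood makes ties impossible; the tie-retention guard is kept because it matters for more general neighborhoods. A minimal sketch, not the authors' production code:

```python
import numpy as np

def sweep(spins, rng):
    """One Monte Carlo sweep of the tie-retaining majority-vote update:
    each picked site adopts the majority opinion of its von Neumann
    neighborhood plus itself, keeping its opinion on a tie.
    Binary opinions (+1/-1) on an L x L periodic lattice."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        total = (spins[i, j]
                 + spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                 + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        if total != 0:                   # 5 binary votes: never a tie here,
            spins[i, j] = np.sign(total) # guard kept for generality
    return spins

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(50, 50))
for _ in range(100):
    sweep(spins, rng)   # opinion domains coarsen, then freeze
```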
Abstract:
Product lifecycle management (PLM) innovates as it defines both the product as a central element for aggregating enterprise information and the lifecycle as a new time dimension for information integration and analysis. Because of its potential benefits in shortening innovation lead-times and reducing costs, PLM has attracted a lot of attention in industry and research. However, the current PLM implementation stage at most organisations still does not apply lifecycle management concepts thoroughly. In order to close this realisation gap, this article presents a process-oriented framework to support effective PLM implementation. The framework's central point is a set of lifecycle-oriented business process reference models that link the fundamental concepts, enterprise knowledge, and software solutions necessary to deploy PLM effectively. (c) 2007 Elsevier B.V. All rights reserved.
Abstract:
This paper presents some improvements to the model proposed by Machado et al. [Machado SL, Carvalho MF, Vilar OM. Constitutive model for municipal solid waste. J Geotech Geoenviron Eng ASCE 2002;128(11):940-51], now considering the influence of the biodegradation of organic matter on the mechanical behavior of municipal solid waste (MSW). The original framework considers waste as composed of two component groups: fibers and organic paste. Particular laws of behavior are assessed for each component group and then coupled to represent the behavior of the waste. The improvements introduced in this paper take into account the changes in the properties of the fibers and the mass loss due to organic matter depletion over time. Mass loss is calculated indirectly from the MSW gas generation potential through a first-order decay model. It is shown that as biodegradation proceeds the proportion of fibers increases; however, the fibers also undergo a degradation process that tends to reduce their ultimate tensile stress and Young's modulus. The way these changes influence the behavior of MSW is incorporated into the final framework, which captures the main features of MSW stress-strain behavior under different loading conditions. (C) 2007 Elsevier Ltd. All rights reserved.
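A minimal sketch of the first-order decay bookkeeping mentioned above; all symbols and parameter values are illustrative, not the paper's calibration:

```python
import numpy as np

# First-order decay sketch for MSW degradation: the remaining degradable
# mass decays as m(t) = m0 * exp(-k t), so cumulative gas generation is
# proportional to m0 * (1 - exp(-k t)). Values are illustrative only.
m0 = 1.0                            # initial degradable organic mass fraction
k = 0.09                            # decay constant, 1/year (hypothetical)
t = np.linspace(0.0, 30.0, 121)     # years
remaining = m0 * np.exp(-k * t)
mass_lost = m0 - remaining          # drives the rising proportion of fibers
```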
Abstract:
In this paper the continuous Verhulst dynamic model is used to synthesize a new distributed power control algorithm (DPCA) for use in direct-sequence code-division multiple-access (DS-CDMA) systems. The Verhulst model was originally designed to describe the population growth of biological species under food and physical space restrictions. The discretization of the corresponding differential equation is accomplished via the Euler numeric integration (ENI) method. Analytical convergence conditions for the proposed DPCA are also established. Several properties of the proposed recursive algorithm, such as the Euclidean distance from the optimum vector after convergence, convergence speed, normalized mean squared error (NSE), average power consumption per user, performance under dynamic channels, and implementation complexity, are analyzed through simulations. The simulation results are compared with two other DPCAs: the classic algorithm derived by Foschini and Miljanic and the sigmoidal algorithm of Uykan and Koivo. Under estimation-error conditions, the proposed DPCA exhibits a smaller discrepancy from the optimum power vector solution and better convergence (under both fixed and adaptive convergence factors) than the classic and sigmoidal DPCAs. (C) 2010 Elsevier GmbH. All rights reserved.
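A hedged reconstruction of a Verhulst-based update discretized by Euler numeric integration, assuming the usual linear relation between a user's power and its SINR; the carrying capacity is taken as the power that would meet the target SINR, and all names and values are illustrative rather than the authors' exact recursion:

```python
import numpy as np

def verhulst_dpca_step(p, gamma, gamma_target, alpha=0.5):
    """One Euler step of a Verhulst-style distributed power update.
    Each user moves toward the power p* that would meet its target SINR
    (p* = p * gamma_target / gamma under a linear-SINR assumption),
    with logistic damping: p <- p + alpha * p * (1 - p / p*)."""
    p_star = p * gamma_target / gamma
    return p + alpha * p * (1.0 - p / p_star)

# Toy example: 3 users, current SINRs vs. a common target
p = np.array([0.10, 0.20, 0.05])      # transmit powers (W), illustrative
gamma = np.array([2.0, 6.0, 3.5])     # measured SINRs
p_next = verhulst_dpca_step(p, gamma, gamma_target=4.0)
```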
Abstract:
There are many techniques for electricity market price forecasting. However, most of them are designed for expected-price analysis rather than price spike forecasting, and an effective method of predicting the occurrence of spikes has not yet appeared in the literature. In this paper, a data-mining-based approach is presented to give a reliable forecast of the occurrence of price spikes. Combined with the spike value prediction techniques developed by the same authors, the proposed approach aims at providing a comprehensive tool for price spike forecasting. Feature selection techniques are first described to identify the attributes relevant to the occurrence of spikes. A brief introduction to the classification techniques is given for completeness. Two algorithms, the support vector machine and a probability classifier, are chosen as the spike occurrence predictors and are discussed in detail. Realistic market data are used to test the proposed model, with promising results.
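As a sketch of the two-stage scheme (feature selection, then spike/no-spike classification), here is a minimal scikit-learn pipeline; the synthetic data and feature count are placeholders, and the paper's probability classifier is not reproduced:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

# Placeholder data standing in for market attributes (demand, reserve
# margin, price lags, ...); the spike label is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = (X[:, 0] + 0.5 * X[:, 3]
     + rng.normal(scale=0.5, size=500) > 1.5).astype(int)

# Select the attributes most relevant to spike occurrence, then classify.
clf = make_pipeline(SelectKBest(f_classif, k=5), SVC(probability=True))
clf.fit(X[:400], y[:400])
spike_prob = clf.predict_proba(X[400:])[:, 1]   # forecast spike probabilities
```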
Abstract:
Modeling volcanic phenomena is complicated by free-surfaces often supporting large rheological gradients. Analytical solutions and analogue models provide explanations for fundamental characteristics of lava flows. But more sophisticated models are needed, incorporating improved physics and rheology to capture realistic events. To advance our understanding of the flow dynamics of highly viscous lava in Peléean lava dome formation, axi-symmetrical Finite Element Method (FEM) models of generic endogenous dome growth have been developed. We use a novel technique, the level-set method, which tracks a moving interface, leaving the mesh unaltered. The model equations are formulated in an Eulerian framework. In this paper we test the quality of this technique in our numerical scheme by considering existing analytical and experimental models of lava dome growth which assume a constant Newtonian viscosity. We then compare our model against analytical solutions for real lava domes extruded on Soufrière, St. Vincent, W.I. in 1979 and Mount St. Helens, USA in October 1980 using an effective viscosity. The level-set method is found to be computationally light and robust enough to model the free-surface of a growing lava dome. Also, by modeling the extruded lava with a constant pressure head this naturally results in a drop in extrusion rate with increasing dome height, which can explain lava dome growth observables more appropriately than when using a fixed extrusion rate. From the modeling point of view, the level-set method will ultimately provide an opportunity to capture more of the physics while benefiting from the numerical robustness of regular grids.
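For intuition about how the level-set method tracks a moving interface on a fixed grid, here is a deliberately simple 1-D upwind sketch; the paper's axisymmetric FEM formulation is far richer, and everything below is illustrative:

```python
import numpy as np

def advect_level_set(phi, speed, dx, dt):
    """One explicit upwind step of the 1-D level-set equation
    phi_t + v * phi_x = 0; the interface is the zero contour of phi.
    The grid itself never moves or deforms."""
    phi_x_minus = (phi - np.roll(phi, 1)) / dx    # backward difference
    phi_x_plus = (np.roll(phi, -1) - phi) / dx    # forward difference
    upwind = np.where(speed > 0, phi_x_minus, phi_x_plus)
    return phi - dt * speed * upwind

x = np.linspace(0.0, 1.0, 201)
phi = x - 0.3                  # signed distance; interface at x = 0.3
for _ in range(100):
    phi = advect_level_set(phi, speed=0.5, dx=x[1] - x[0], dt=0.002)
# interface has moved to roughly x = 0.3 + 0.5 * 100 * 0.002 = 0.4
```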
Abstract:
A simple theoretical framework is presented for bioassay studies using three-component in vitro systems. An equilibrium model is used to derive equations useful for predicting changes in biological response after the addition of hormone-binding protein or as a consequence of increased hormone affinity. Sets of possible solutions for receptor occupancy and binding protein occupancy are found for typical values of receptor and binding protein affinity constants. Unique equilibrium solutions are dictated by the initial condition of total hormone concentration. According to the occupancy theory of drug action, increasing the affinity of a hormone for its receptor will result in a proportional increase in biological potency. However, the three-component model predicts that the magnitude of the increase in biological potency will be a small fraction of the proportional increase in affinity. With typical initial conditions, a two-fold increase in hormone affinity for its receptor is predicted to result in only a 33% increase in biological response. Under the same conditions, an 11-fold increase in hormone affinity for the receptor would be needed to produce a two-fold increase in biological potency. Some currently used bioassay systems may be unrecognized three-component systems, and gross errors in biopotency estimates will result if the effect of the binding protein is not calculated. An algorithm derived from the three-component model is used to predict changes in biological response after the addition of binding protein to in vitro systems. The algorithm is tested by application to a published data set from an experimental study in an in vitro system (Lim et al., 1990, Endocrinology 127, 1287-1291). Predicted changes show good agreement (within 8%) with experimental observations. (C) 1998 Academic Press Limited.
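The three-component equilibrium calculation can be sketched directly from mass-action and conservation relations. The solver below is a generic reconstruction, not the paper's algorithm, and the constants and concentrations are illustrative; with these values, doubling the receptor affinity raises occupancy by roughly a third, echoing the abstract's 33% example:

```python
from scipy.optimize import brentq

def receptor_occupancy(H_tot, R_tot, B_tot, K_R, K_B):
    """Equilibrium receptor occupancy in a three-component system
    (hormone H, receptor R, binding protein B) with association
    constants K_R and K_B. Solves mass conservation for free hormone."""
    def excess(H_free):
        HR = K_R * H_free * R_tot / (1.0 + K_R * H_free)
        HB = K_B * H_free * B_tot / (1.0 + K_B * H_free)
        return H_free + HR + HB - H_tot
    H_free = brentq(excess, 0.0, H_tot)         # free hormone concentration
    return K_R * H_free / (1.0 + K_R * H_free)  # fraction of receptor occupied

# Illustrative constants (molar units): doubling K_R raises occupancy
# far less than two-fold because the binding protein buffers free hormone.
low = receptor_occupancy(1e-9, 1e-10, 1e-8, K_R=1e10, K_B=1e9)
high = receptor_occupancy(1e-9, 1e-10, 1e-8, K_R=2e10, K_B=1e9)
```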
Abstract:
Integrable open-boundary conditions for the model of three coupled one-dimensional XY spin chains are considered in the framework of the quantum inverse scattering method. The diagonal boundary K-matrices are found and a class of integrable boundary terms is determined. The boundary model Hamiltonian is solved using the coordinate-space Bethe ansatz technique, and the Bethe ansatz equations are derived. (C) 1998 Elsevier Science B.V.
Abstract:
The Montreal Process indicators are intended to provide a common framework for assessing and reviewing progress toward sustainable forest management. The potential of a combined geometrical-optical/spectral mixture analysis model was assessed for mapping the Montreal Process age class and successional stage indicators at a regional scale using Landsat Thematic Mapper data. The project location is an area of eucalyptus forest in Emu Creek State Forest, Southeast Queensland, Australia. A quantitative model was applied that relates the spectral reflectance of a forest to the illumination geometry, the slope and aspect of the terrain surface, and the size, shape, and density of the tree crowns. Inversion of this model necessitated the use of spectral mixture analysis to recover subpixel information on the fractional extent of ground scene elements (such as sunlit canopy, shaded canopy, sunlit background, and shaded background). Results obtained from a sensitivity analysis allowed improved allocation of resources to maximize the predictive accuracy of the model. It was found that modeled estimates of crown cover projection, canopy size, and tree density had significant agreement with field and air-photo-interpreted estimates. However, the accuracy of the successional stage classification was limited. The results highlight the potential for future integration of high and moderate spatial resolution imaging sensors for monitoring forest structure and condition. (C) Elsevier Science Inc., 2000.
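A hedged sketch of the linear spectral mixture analysis step used to recover subpixel fractions of scene elements; the endmember spectra and band count below are invented for illustration and are not the study's values:

```python
import numpy as np

def unmix(pixel, endmembers):
    """Linear spectral unmixing sketch: estimate the fractions of scene
    elements (e.g. sunlit canopy, shaded canopy, sunlit background,
    shaded background) whose spectra mix to form one pixel. Solves a
    least-squares system with a sum-to-one constraint via augmentation."""
    n_bands, n_end = endmembers.shape
    A = np.vstack([endmembers, np.ones((1, n_end))])   # sum-to-one row
    b = np.append(pixel, 1.0)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(fractions, 0.0, None)               # crude non-negativity

# Columns: sunlit canopy, shaded canopy, sunlit background, shaded background
endmembers = np.array([[0.30, 0.05, 0.25, 0.04],       # band 1 reflectances
                       [0.45, 0.08, 0.30, 0.05],       # band 2
                       [0.60, 0.10, 0.40, 0.07]])      # band 3
pixel = endmembers @ np.array([0.4, 0.2, 0.3, 0.1])    # synthetic mixed pixel
print(unmix(pixel, endmembers))                        # recovers the fractions
```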