970 results for Fractional Laplace and Dirac operators


Abstract:

The aim of this work was to describe the domestic pharmaceutical market across the entire value chain, starting from the pharmaceutical factories and ending at retail sales or hospital distribution. In addition, the work described the state of outsourced services in public healthcare, particularly with regard to logistics services. The market description was used to determine the market potential of a Finnish logistics company in these markets. The second aim of the work was to examine the markets using the tools of futures research and to create scenarios for these markets 10–15 years ahead of the present. Based on the scenarios and the market potential study, the outcome of the work is a set of recommended actions for the client company concerning the attractiveness, from the company's point of view, of the markets under study. The information used in the study was gathered from public reports and surveys, and by conducting an extensive interview study with private and public actors across the domestic pharmaceutical and hospital supply value chain. The interview study is the most significant source of information for the work, and the interpretation of weak signals employed in the work is based on the information obtained through the interviews.

Abstract:

The hub location problem is an NP-hard problem that frequently arises in the design of transportation and distribution systems, postal delivery networks, and airline passenger flow. This work focuses on the Single Allocation Hub Location Problem (SAHLP). Genetic Algorithms (GAs) based on new chromosome representations and crossover operators are explored for the capacitated and uncapacitated variants of the SAHLP. The GAs are tested on two well-known sets of real-world problems with up to 200 nodes. The obtained results are very promising: for most of the test problems the GA obtains improved or best-known solutions, and the computational time remains low. The proposed GAs can easily be extended to other variants of location problems arising in network design planning in transportation systems.
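
The abstract does not detail its encodings, but a common single-allocation encoding assigns each node the index of its hub, and the objective follows the standard SAHLP cost structure with an inter-hub discount factor. A minimal Python sketch under those assumptions (the chromosome layout and one-point crossover are illustrative, not the paper's operators):

```python
# Hypothetical single-allocation encoding for the SAHLP: gene i holds the
# hub that node i is allocated to; hubs are the distinct genes used.
import random

def random_chromosome(n_nodes, candidate_hubs):
    return [random.choice(candidate_hubs) for _ in range(n_nodes)]

def cost(chrom, c, alpha, w, fixed):
    # c[i][j]: unit transport cost, w[i][j]: flow from i to j,
    # alpha: inter-hub discount, fixed[h]: cost of opening hub h
    hubs = set(chrom)
    total = sum(fixed[h] for h in hubs)
    n = len(chrom)
    for i in range(n):
        for j in range(n):
            k, m = chrom[i], chrom[j]
            # collection i->k, discounted transfer k->m, distribution m->j
            total += w[i][j] * (c[i][k] + alpha * c[k][m] + c[m][j])
    return total

def one_point_crossover(p1, p2):
    # exchange allocation tails between two parents
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
```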

Abstract:

Feature selection plays an important role in knowledge discovery and data mining. In traditional rough set theory, feature selection using a reduct - the minimal discerning set of attributes - is an important area. Nevertheless, the original definition of a reduct is restrictive, so previous research proposed taking into account not only the horizontal reduction of information by feature selection, but also a vertical reduction that considers suitable subsets of the original set of objects. Following the work mentioned above, a new approach to generating bireducts using a multi-objective genetic algorithm is proposed. Although genetic algorithms have been used to calculate reducts in some previous works, we did not find any work in which genetic algorithms were adopted to calculate bireducts. Compared to earlier work in this area, the proposed method has less randomness in generating bireducts. The genetic algorithm estimates the quality of each bireduct by the values of two objective functions as evolution progresses, so a set of bireducts with optimized values of these objectives is obtained. Different fitness evaluation methods and genetic operators, such as crossover and mutation, were applied and the resulting prediction accuracies were compared. Five datasets were used to test the proposed method and two datasets were used for a comparison study. Statistical analysis using the one-way ANOVA test was performed to determine the significance of differences between the results. The experiments showed that the proposed method was able to reduce the number of bireducts needed to obtain good prediction accuracy. The influence of different genetic operators and fitness evaluation strategies on the prediction accuracy was also analyzed. The prediction accuracies of the proposed method are comparable with the best results in the machine learning literature, and some of them outperform those results.
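
For illustration, a bireduct candidate can be encoded as a pair of bitstrings (kept attributes, kept objects) with a consistency check and the two competing objectives named above. A minimal sketch under these assumptions, not the exact representation used in the thesis:

```python
# Hypothetical bireduct individual for a multi-objective GA: minimize the
# number of attributes while maximizing the number of covered objects,
# subject to the kept attributes discerning every kept pair of objects
# with different decisions.
import random
from itertools import combinations

def random_individual(n_attrs, n_objs):
    return ([random.randint(0, 1) for _ in range(n_attrs)],
            [random.randint(0, 1) for _ in range(n_objs)])

def is_consistent(data, decisions, attrs, objs):
    kept = [i for i, bit in enumerate(objs) if bit]
    cols = [a for a, bit in enumerate(attrs) if bit]
    for i, j in combinations(kept, 2):
        if decisions[i] != decisions[j] and \
           all(data[i][a] == data[j][a] for a in cols):
            return False  # indiscernible pair with conflicting decisions
    return True

def objectives(attrs, objs):
    # both values are to be minimized by the multi-objective selection
    return (sum(attrs), -sum(objs))
```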

Abstract:

The integrative role that the Court of Justice of the European Communities (CJCE) has played in European construction is well known and well documented. What is less well known are the reasons that motivated it, and still motivate it. While some have already examined this question, one aspect has nevertheless been completely neglected: the influence that the prevailing context may have had on Community case law, and more precisely on the orientation the Court chose to give it. In this framework, the Court's audiences play a decisive role. To ensure the proper application of its decisions, the Court must take into account the expectations of the Member States, the European institutions, the legal community (national courts, advocates general, scholars, and practitioners), and European nationals (citizens and economic operators). Thus, to the question of why the CJCE decides (or not) to intervene, in the field of the free movement of goods, in favour of European economic integration, I advance the following hypothesis: the Court's intervention depends on a central variable, its audiences, whose expectations (and respective weight) are themselves determined by the prevailing context. The objective is to bring out the more ideological aspect of the Court's decision-making, largely overlooked by legal scholarship, and to demonstrate that the fluctuating character of Community case law in this field, and in particular in the interpretation of Article 28 of the EC Treaty, is explained by the Court's taking into account the expectations of its audiences, which have largely embraced neoliberal ideology. In order to better grasp the variable weight of each of the Court's audiences, the first part assesses the prevailing context of European construction from 1990 to 2006, notably the neoliberal turn it took. The study of the audiences and their impact on the case law is the subject of the second part of my thesis. I show that Community case law is case law "under influence", essentially in the service of achieving and then deepening the European internal market.

Abstract:

Combinational digital circuits can be evolved automatically using Genetic Algorithms (GAs). Until recently this technique used linear chromosomes and one-dimensional crossover and mutation operators. In this paper, a new method for representing combinational digital circuits as two-dimensional (2D) chromosomes, together with suitable 2D crossover and mutation techniques, is proposed. With this method, the convergence speed of the GA can be increased significantly compared to conventional methods. Moreover, the 2D representation and crossover operation provide the designer with better visualization of the evolved circuits. In addition, a technique to automatically display the evolved circuits has been developed with the help of MATLAB.
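
As a sketch of the idea, a 2D chromosome can be a grid of gate labels, and a 2D crossover can exchange a random rectangular block between two parents; this is a plausible reading of the scheme, not the paper's exact gate encoding (which must also carry wiring information):

```python
# Hypothetical grid-of-gates 2D chromosome with a block-exchange crossover
# and a cell-wise mutation.
import random

GATES = ['AND', 'OR', 'XOR', 'NOT', 'WIRE']

def random_grid(rows, cols):
    return [[random.choice(GATES) for _ in range(cols)] for _ in range(rows)]

def crossover_2d(a, b):
    # pick a random rectangle and swap its contents between the parents
    rows, cols = len(a), len(a[0])
    r1, r2 = sorted(random.sample(range(rows + 1), 2))
    c1, c2 = sorted(random.sample(range(cols + 1), 2))
    child1 = [row[:] for row in a]
    child2 = [row[:] for row in b]
    for r in range(r1, r2):
        for c in range(c1, c2):
            child1[r][c], child2[r][c] = b[r][c], a[r][c]
    return child1, child2

def mutate_2d(grid, rate=0.05):
    # replace each cell with a random gate with small probability
    for row in grid:
        for c in range(len(row)):
            if random.random() < rate:
                row[c] = random.choice(GATES)
    return grid
```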

Abstract:

Non-relativistic and relativistic self-consistent Hartree-Fock-Slater and Dirac-Slater models have been used to calculate one-electron energy levels and ionization energies for UF5. The calculations were performed in an assumed structure of C4v symmetry with the uranium atom at the center of mass of the molecule. The spacing and level ordering are compared with earlier results obtained with the MS-Xα method using the muffin-tin approximation. Connections with the multiphoton isotope separation scheme of UF6 are discussed.

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. In recent years, the number of novel network types has steadily increased. Amongst others, sensor networks, distributed systems composed of tiny computational devices with scarce resources, have emerged. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process in which the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process selects the most promising solution candidates step by step and modifies and combines them with mutation and crossover operators. This way, a description of the global behavior of a distributed system is translated automatically into programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways of representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations designed by us, called Rule-based Genetic Programming (RBGP, eRBGP). We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches were developed especially to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and in most cases was superior to the other representations.
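
The evolutionary loop described above can be sketched compactly; random_program, mutate, crossover, and simulate_network are hypothetical stand-ins for the thesis's program representations and randomized network simulator:

```python
# Sketch of simulation-driven Genetic Programming for distributed
# algorithms: fitness is the average, over randomized simulations, of how
# closely the locally executed program approximates the specified global
# behavior (higher is better).
import random

def evolve(pop_size, generations, random_program, mutate, crossover,
           simulate_network, n_sims=10):
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda p: sum(simulate_network(p)
                                          for _ in range(n_sims)) / n_sims,
                        reverse=True)
        survivors = scored[:pop_size // 2]      # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            children.append(mutate(crossover(a, b)))
        population = survivors + children
    return max(population, key=simulate_network)
```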

Abstract:

Exercises and solutions in LaTeX

Abstract:

Exercises and solutions in PDF

Abstract:

Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via a concept of fuzzification, using fuzzy membership functions (usually based on B-splines) and algebraic operators for inference. The paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of the B-spline expansion based neurofuzzy system, such as the non-negativity of the basis functions and unity of support, but with the additional advantages of structural parsimony and Delaunay input space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots that form the vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network is expressed as the Bezier-Bernstein polynomial function of the barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates that form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed, followed by numerical examples demonstrating the effectiveness of this new data-based modelling approach.
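
To make the construction concrete, the sketch below computes barycentric co-ordinates of a 2D input with respect to one Delaunay triangle and evaluates the degree-n Bezier-Bernstein basis on them; by construction the basis values are non-negative and sum to one:

```python
# Barycentric co-ordinates of point p w.r.t. triangle (a, b, c), followed
# by the multinomial Bezier-Bernstein basis of degree n on (l1, l2, l3).
from math import factorial

def barycentric(p, a, b, c):
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    l1 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    l2 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return l1, l2, 1.0 - l1 - l2

def bernstein_basis(l1, l2, l3, n=2):
    # B_{ijk} = n!/(i! j! k!) * l1^i * l2^j * l3^k for i + j + k = n;
    # the values sum to (l1 + l2 + l3)^n = 1
    basis = {}
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            coef = factorial(n) // (factorial(i) * factorial(j) * factorial(k))
            basis[(i, j, k)] = coef * l1**i * l2**j * l3**k
    return basis
```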

Abstract:

In this letter, we consider beamforming strategies in amplify-and-forward (AF) two-way relay channels, where the two terminals and the relay are all equipped with multiple antennas. Our aim is to optimize the worse of the two end-to-end signal-to-noise ratios, so that the reliability of both terminals can be guaranteed. We show that the optimization problem can be recast as a generalized fractional program and solved by using the Dinkelbach-type procedure combined with semidefinite programming. Simulation results confirm the efficiency of the proposed strategies.
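
The Dinkelbach-type procedure itself is simple to state. A minimal sketch for a generalized fractional program max_x min_i N_i(x)/D_i(x), with a naive search over a finite candidate set standing in for the semidefinite program solved in the letter:

```python
# Dinkelbach-type iteration: find lambda such that
# max_x min_i [N_i(x) - lambda * D_i(x)] = 0; that lambda is the optimal
# worst-link ratio. Assumes D_i(x) > 0 on the candidate set.
def dinkelbach(N, D, candidates, tol=1e-8, max_iter=100):
    lam = 0.0
    x = candidates[0]
    for _ in range(max_iter):
        # inner (parametrized) problem: brute force here; an SDP in the paper
        x = max(candidates,
                key=lambda y: min(n(y) - lam * d(y) for n, d in zip(N, D)))
        f = min(n(x) - lam * d(x) for n, d in zip(N, D))
        if abs(f) < tol:
            break
        lam = min(n(x) / d(x) for n, d in zip(N, D))
    return x, lam
```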

Abstract:

An analysis method for diffusion tensor (DT) magnetic resonance imaging data is described, which, contrary to the standard method (multivariate fitting), does not require a specific functional model for diffusion-weighted (DW) signals. The method uses principal component analysis (PCA) under the assumption of a single fibre per pixel. PCA and the standard method were compared using simulations and human brain data. The two methods were equivalent in determining fibre orientation. PCA-derived fractional anisotropy and DT relative anisotropy had similar signal-to-noise ratio (SNR) and dependence on fibre shape. PCA-derived mean diffusivity had similar SNR to the respective DT scalar, and it depended on fibre anisotropy. Appropriate scaling of the PCA measures resulted in very good agreement between PCA and DT maps. In conclusion, the assumption of a specific functional model for DW signals is not necessary for characterization of anisotropic diffusion in a single fibre.
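
As a rough, hypothetical illustration of the single-fibre idea (not the paper's exact pipeline): treat the attenuation along each gradient direction as a weight on that direction and take the principal eigenvector of the weighted second-moment matrix as the fibre orientation, with no tensor fit:

```python
# Model-free orientation estimate from DW data under a single-fibre
# assumption. signals: DW signal per gradient direction; s0: b=0 signal;
# bvecs: unit gradient directions, shape (n_dirs, 3).
import numpy as np

def fibre_orientation(signals, s0, bvecs):
    # -log(S/S0) grows with diffusivity along the gradient direction,
    # which is largest along the fibre
    adc = -np.log(np.clip(np.asarray(signals) / s0, 1e-6, None))
    g = np.asarray(bvecs)
    m = (g.T * adc) @ g          # sum_j adc_j * g_j g_j^T
    w, v = np.linalg.eigh(m)
    return v[:, -1]              # eigenvector of the largest eigenvalue
```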

Abstract:

The immersed boundary method is a versatile tool for the investigation of flow-structure interaction. In a large number of applications, the immersed boundaries or structures are very stiff, and strong tangential forces on these interfaces induce a well-known, severe time-step restriction for explicit discretizations. This excessive stability constraint can be removed with fully implicit or suitable semi-implicit schemes, but at a seemingly prohibitive computational cost. While economical alternatives have been proposed recently for some special cases, there is a practical need for a computationally efficient approach that can be applied more broadly. In this context, we revisit a robust semi-implicit discretization introduced by Peskin in the late 1970s which has received renewed attention recently. This discretization, in which the spreading and interpolation operators are lagged, leads to a linear system of equations for the interface configuration at the future time when the interfacial force is linear. However, this linear system is large and dense, and thus it is challenging to streamline its solution. Moreover, while the same linear system or one of similar structure could potentially be used in Newton-type iterations, nonlinear and highly stiff immersed structures pose additional challenges to iterative methods. In this work, we address these problems and propose cost-effective computational strategies for solving Peskin's lagged-operators type of discretization. We do this by first constructing a sufficiently accurate approximation to the system's matrix, and we obtain a rigorous estimate for this approximation. This matrix is expeditiously computed by using a combination of pre-calculated values and interpolation. The availability of a matrix allows for more efficient matrix-vector products and facilitates the design of effective iterative schemes. We propose efficient iterative approaches to deal with both linear and nonlinear interfacial forces and simple or complex immersed structures with tethered or untethered points. One of these iterative approaches employs a splitting in which we first solve a linear problem for the interfacial force and then use a nonlinear iteration to find the interface configuration corresponding to this force. We demonstrate that the proposed approach is several orders of magnitude more efficient than the standard explicit method. In addition to considering the standard elliptical drop test case, we show both the robustness and efficacy of the proposed methodology with a 2D model of a heart valve.
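
For a linear interfacial force F = -K X, the lagged-operators step reduces to one dense linear solve. The sketch below builds the interface-velocity matrix naively, column by column, whereas the paper constructs a cheaper approximation from pre-calculated values and interpolation; S (spreading), J (interpolation), and fluid_solve are hypothetical stand-ins:

```python
# One backward-Euler step of the lagged-operators discretization with a
# linear force F = -K X: solve (I + dt * M K) X_new = X, where
# M = J * fluid_solve * S maps an interfacial force to interface velocity.
import numpy as np

def semi_implicit_step(X, S, J, fluid_solve, K, dt):
    n = X.size
    # assemble M one column at a time by pushing unit forces through the
    # spread -> fluid solve -> interpolate chain
    M = np.column_stack([J @ fluid_solve(S @ e) for e in np.eye(n)])
    A = np.eye(n) + dt * (M @ K)
    return np.linalg.solve(A, X)
```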

Abstract:

This paper examines the statistical properties of the real exchange rates of the G-5 countries for the Bretton-Woods period and draws implications for the purchasing power parity (PPP) hypothesis. In contrast to most previous studies, which consider only unit root and stationary processes to describe the real exchange rate, this paper also considers two in-between processes, the locally persistent process and the fractionally integrated process, to complement past studies. Consistent with the ample evidence of near unit root behavior, we find that the locally persistent process describes the real exchange rate movements very well. This finding implies that: 1) the real exchange rate movement is more persistent than the stationary case but less persistent than the unit root case; 2) the real exchange rate is non-stationary, but PPP reversion occurs and PPP holds in the long run; 3) the real exchange rate does not exhibit the secular dependence of fractional integration; 4) the real exchange rate evolves over time in such a way that there is persistence over a range of time, but the effect of shocks eventually disappears over time horizons longer than order O(n^d), that is, at finite time horizons; 5) shock dissipation is faster than predicted by fractional integration, and the total sum of the effects of a unit innovation is finite, implying that full PPP reversion occurs at finite horizons. These results may explain why past empirical studies could not provide a clear conclusion on the real exchange rate process and the PPP hypothesis.
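
To make the persistence classes concrete, the sketch below simulates a fractionally integrated ARFIMA(0, d, 0) series, whose shock effects decay hyperbolically, next to a unit-root random walk, whose shocks never decay; the parameters are illustrative only:

```python
# ARFIMA(0, d, 0): x_t = (1 - L)^(-d) eps_t, expanded as an MA process with
# weights psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k.
import numpy as np

def arfima_0d0(n, d, rng):
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.standard_normal(n)
    return np.convolve(eps, psi)[:n]

rng = np.random.default_rng(0)
frac = arfima_0d0(2000, 0.4, rng)             # long memory, still reverting
walk = np.cumsum(rng.standard_normal(2000))   # unit root: no reversion
```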

Abstract:

This dissertation studies the innovative technological capabilities involved in merger and acquisition processes and the relationship between these processes and the accumulation of technological capabilities needed to achieve convergence of technology and services. The study examined fourteen companies in the telecommunications industry from 2002 to 2007. Starting in 1990, there has been, on the one hand, a profusion of studies on technological capabilities as a source of competitive advantage and, on the other, studies on mergers and acquisitions aimed at evaluating the motivations derived from technological factors, the stimulation of competition, and the opening of the market. However, there are few long-period empirical studies that examine the correlation of these events in the telecommunications industry from the perspective of firm-level technological capabilities and the strategic perspective of the firm based on dynamic capabilities. An analytical framework already available in the literature was used to describe the contribution of merger and acquisition processes to the accumulation of innovative technological capabilities in the companies studied; the framework was adapted specifically for the telecommunications industry. This dissertation also studies the importance of strategic mergers and acquisitions as an organizational form for complementing technological capabilities from external sources. The empirical evidence was collected from information and databases published by the companies examined in this dissertation. Regarding the results, it was found that: 1. In terms of the technological capabilities brought into strategic mergers and acquisitions, the equipment manufacturers contributed 71% of the 55 technological capabilities identified, and the service operators contributed 61% of the 71 technological capabilities identified. 2. In terms of the implications of mergers and acquisitions for the configuration of the resulting technological capabilities, the equipment manufacturers increased the ratio of convergence of technology by 31%, and the service operators increased the ratio by 4% through changes in organizational structure. 3. Regarding the accumulation of technological capabilities to achieve convergence of technology and services, an increase in these capabilities was verified after the merger and acquisition processes in the companies studied. Considering the limitations of this study, the evidence found in this dissertation suggests that companies use strategic merger and acquisition processes to seek external complements to their knowledge base in order to compete in the globalized market. The results demonstrate that this movement has implied an alteration and accumulation of organizational capabilities in innovative technological activities related to the convergence of technology and services.