876 results for Many-Valued Intellectual System


Relevance: 100.00%

Abstract:

We describe the research and development of a mathematical model for the optimal distribution of resources (chiefly financial) to bring a complex system, for which a restructuring decision has been taken, to a new (higher) level of quality (reliability). The final model, via a computational algorithm, answers the questions: how many system elements to allocate for modernization, which elements, and to what depth each selected element should be modernized; the answers are optimal by the criterion of minimizing financial expenditure.
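The abstract leaves the model itself unstated, but its three questions (how many elements, which ones, to what depth) amount to a multiple-choice selection problem. A minimal sketch in Python, assuming a hypothetical instance with per-element modernization options and a simplified additive reliability gain; the paper's actual model and algorithm are not reproduced here:

```python
from itertools import product

# Hypothetical instance: for each system element, a list of
# (cost, reliability_gain) pairs, one per modernization depth;
# depth 0 means "leave the element as it is".
elements = [
    [(0, 0.00), (3, 0.05), (7, 0.12)],   # element A
    [(0, 0.00), (4, 0.08), (9, 0.15)],   # element B
    [(0, 0.00), (2, 0.03), (6, 0.10)],   # element C
]
target_gain = 0.20  # required overall reliability improvement (assumed additive)

best = None
for depths in product(*(range(len(opts)) for opts in elements)):
    cost = sum(elements[i][d][0] for i, d in enumerate(depths))
    gain = sum(elements[i][d][1] for i, d in enumerate(depths))
    if gain >= target_gain and (best is None or cost < best[0]):
        best = (cost, depths)

if best:
    cost, depths = best
    chosen = [i for i, d in enumerate(depths) if d > 0]
    print(f"modernize elements {chosen} at depths {depths}, total cost {cost}")
```

Exhaustive search is used here only for clarity; with many elements a dynamic-programming or branch-and-bound formulation of the same selection problem would be the practical choice.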

Relevance: 100.00%

Abstract:

* This paper was supported by the Far Eastern Branch of the Russian Academy of Sciences (project 06-III-A-01-005) and by the Russian Foundation for Basic Research (project 06-07-89071-a).

Relevance: 100.00%

Abstract:

Reconfigurable platforms are a promising technology that offers an interesting trade-off between flexibility and performance, which many recent embedded system applications demand, especially in fields such as multimedia processing. These applications typically involve multiple ad hoc tasks for hardware acceleration, usually represented using formalisms such as Data Flow Diagrams (DFDs), Data Flow Graphs (DFGs), Control and Data Flow Graphs (CDFGs) or Petri nets. However, none of these models can capture at the same time the pipeline behavior between tasks (which can therefore coexist in order to minimize the application execution time), their communication patterns, and their data dependencies. This paper shows that knowledge of all this information can be effectively exploited to reduce the resource requirements and improve the timing performance of modern reconfigurable systems, where a set of hardware accelerators is used to support the computation. For this purpose, this paper proposes a novel task representation model, named the Temporal Constrained Data Flow Diagram (TCDFD), which includes all this information. The paper also presents a mapping-scheduling algorithm that takes advantage of the new TCDFD model; it aims at minimizing the dynamic reconfiguration overhead while meeting the communication requirements among the tasks. Experimental results show that the presented approach achieves up to 75% resource savings and up to 89% reduction in reconfiguration overhead with respect to other state-of-the-art techniques for reconfigurable platforms.
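Since the abstract does not spell out the TCDFD notation, the following is only a hedged Python sketch of the kind of information such a model must carry per task and per edge (data dependency, communication volume, pipeline overlap); all names and numbers are illustrative assumptions, not the paper's definition:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    exec_time: int                      # cycles on its hardware accelerator

@dataclass
class Edge:
    src: str
    dst: str
    volume: int                         # words transferred (communication pattern)
    pipelined: bool                     # True if dst may start while src still runs

@dataclass
class TaskGraph:                        # a TCDFD-like container (illustrative only)
    tasks: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_task(self, t: Task):
        self.tasks[t.name] = t

    def add_edge(self, e: Edge):
        self.edges.append(e)

# A toy multimedia pipeline: decode -> filter -> encode,
# where filter may overlap (pipeline) with decode.
g = TaskGraph()
for t in (Task("decode", 400), Task("filter", 250), Task("encode", 300)):
    g.add_task(t)
g.add_edge(Edge("decode", "filter", volume=1024, pipelined=True))
g.add_edge(Edge("filter", "encode", volume=1024, pipelined=False))
```

A mapping-scheduling algorithm of the kind described would consume exactly these three pieces of edge information to decide which accelerators can share the fabric concurrently and when reconfigurations can be hidden.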

Relevance: 50.00%

Abstract:

This chapter contributes to the anthology on learning to research - researching to learn because it emphasises the need to design curricula that enable living research and ongoing researcher development, rather than curricula that restrict student and staff activities within a marketised approach towards time.

In recent decades higher education (HE) has come to be valued for its contribution to the global economy. In what is referred to as the neo-liberal university, a strong priority is placed on meeting the needs of industry by providing a better workforce. This perspective emphasises the role of a degree in HE as securing future material affluence, rather than study as an ongoing investment in the self (Molesworth, Nixon & Scullion, 2009: 280). Students are treated primarily as consumers in this model: through their tuition fees they purchase a product, rather than benefit from the transformative potential university education offers for the whole of life.

Given that HE is now measured by the number of students it attracts, and later places into well-paid jobs, there is intense pressure on time, which has led to an approach in which the learning experiences of students are broken down into discrete modules. Whilst this provides consistency, students can come to view research processes in a fragmented way within the modular system. Topics are presented chronologically, week by week, and students simply complete a set of tasks to 'have a degree', rather than to 'be learners' (Molesworth, Nixon & Scullion, 2009: 277) who are living their research in relation to their own past, present and future. The idea of living research in this context is my own adaptation of an approach suggested by C. Wright Mills (1959) in The Sociological Imagination. Mills advises that successful scholars do not split their work from the rest of their lives, but treat scholarship as a choice of how to live, as well as a choice of career.

The marketised slant in HE thus creates a tension, firstly, for students who are learning to research. Mills would encourage them to be creative, not instrumental, in their use of time, yet they are journeying through a system structured for swift progression towards a highly paid job, rather than crafted for the reflexive inquiry that transforms their understanding throughout life. Many universities place a strong focus on discrete skills for student employability, but I suggest that embedding the transformative skills emphasised by Mills empowers students and builds the confidence that helps them make connections that aid their employability. Secondly, the marketised approach creates a problem for staff designing the curriculum if students do not easily make links across time over their years of study and whole programmes. By researching to learn, staff can discover new methods to apply in their design of the curriculum, helping students make important and creative connections across their programmes of study.

Relevance: 40.00%

Abstract:

3rd Workshop on High-performance and Real-time Embedded Systems (HIRES 2015), 21 January 2015, Amsterdam, Netherlands.

Relevance: 40.00%

Abstract:

This research report presents an application of systems theory to evaluating intellectual capital (IC) as an organization's ability for self-renewal. As renewal ability is a dynamic capability of the organization as a whole, rather than a static asset or an atomistic competence of separate individuals within the organization, it needs to be understood systemically. Consequently, renewal ability has to be measured with systemic methods based on a thorough conceptual analysis of the systemic characteristics of organizations. The aim of this report is to demonstrate the theory and analysis methodology for grasping companies' systemic efficiency and renewal ability. The volume is divided into three parts. The first deals with the theory of organizations as self-renewing systems. The second lays down the principles of quantitative analysis of organizations. Finally, the detailed mathematics of the renewal indices are presented. We also assert that the indices produced by the analysis are an effective tool for the management and valuation of knowledge-intensive companies.

Relevance: 40.00%

Abstract:

Through advances in technology, System-on-Chip design is moving towards integrating tens to hundreds of intellectual property blocks into a single chip. In such a many-core system, on-chip communication becomes a performance bottleneck for high performance designs. Network-on-Chip (NoC) has emerged as a viable solution to the communication challenges in highly complex chips. The NoC architecture paradigm, based on a modular packet-switched mechanism, can address many on-chip communication challenges such as wiring complexity, communication latency, and bandwidth. Furthermore, the combined benefits of 3D IC and NoC schemes make it possible to design a high performance system in a limited chip area. The major advantages of 3D NoCs are considerable reductions in average latency and power consumption.

Several factors degrade the performance of NoCs. In this thesis, we investigate three main performance-limiting factors: network congestion, faults, and the lack of efficient multicast support. We address these issues by means of routing algorithms.

Congestion of data packets may lead to increased network latency and power consumption. We therefore propose three different approaches for alleviating congestion in the network. The first approach is based on measuring congestion information in different regions of the network, distributing that information over the network, and utilizing it when making routing decisions. The second approach employs a learning method to dynamically find less congested routes according to the underlying traffic. The third approach is based on a fuzzy-logic technique that makes better routing decisions when traffic information for the different routes is available.

Faults affect performance significantly, since packets must take longer paths in order to be routed around the faults, which in turn increases congestion around the faulty regions. We propose four methods to tolerate faults at the link and switch level by using only the shortest paths, as long as such a path exists. The distinguishing characteristic of these methods is that they tolerate faults while also maintaining the performance of the NoC. To the best of our knowledge, these algorithms are the first approaches that bypass faults before reaching them while avoiding unnecessary misrouting of packets.

Current implementations of multicast communication incur a significant performance loss for unicast traffic, because the routing rules of multicast packets limit the adaptivity of unicast packets. We present an approach in which both unicast and multicast packets can be routed efficiently within the network. While providing more efficient multicast support, the proposed approach does not affect the performance of unicast routing at all. In addition, in order to reduce the overall path length of multicast packets, we present several partitioning methods along with analytical models for latency measurement. This approach is discussed in the context of 3D mesh networks.
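As a concrete illustration of the first congestion-aware approach, here is a minimal Python sketch of a routing decision in a 2D mesh that stays on shortest paths but prefers the less congested admissible output port; the congestion metric, the names, and the decision rule are simplifying assumptions rather than the thesis's actual algorithms:

```python
# Congestion-aware minimal routing in a 2D mesh NoC (illustrative sketch).

def minimal_directions(cur, dst):
    """Output ports that lie on a shortest path from cur to dst."""
    (cx, cy), (dx, dy) = cur, dst
    dirs = []
    if dx > cx: dirs.append("E")
    if dx < cx: dirs.append("W")
    if dy > cy: dirs.append("N")
    if dy < cy: dirs.append("S")
    return dirs

def route(cur, dst, congestion):
    """Pick the least congested port among the shortest-path candidates.
    `congestion` maps a port name to, e.g., the buffer occupancy
    reported by the neighbouring router in that direction."""
    candidates = minimal_directions(cur, dst)
    if not candidates:
        return "LOCAL"  # packet has arrived at its destination router
    return min(candidates, key=lambda p: congestion[p])

# Two shortest-path choices exist; the router picks the freer one.
print(route((1, 1), (3, 3), {"E": 5, "N": 1, "W": 0, "S": 2}))  # -> "N"
```

Restricting the choice to minimal directions keeps the route shortest-path (as in the fault-tolerant methods above), while the congestion map supplies the adaptivity.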

Relevance: 40.00%

Abstract:

To describe the time dependence of an atomic collision system, the Dirac equation is usually rewritten as a set of coupled-channel equations. We first discuss some of the approximations used in this approach and the connection between the many-particle and the one-particle interpretation. The coupled-channel equations are solved for the system F^{8+}-Ne using static self-consistent many-electron Dirac-Fock-Slater wavefunctions as a basis. The resulting P(b) curves for the creation of a Ne K-hole are in reasonable agreement with the experimental results.
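For orientation, one standard form of this rewriting is sketched below, assuming a static basis {φ_k} with H = H_0 + V(t) and H_0 φ_k = E_k φ_k; the paper's equations, built on a time-dependent molecular basis, will contain additional coupling terms not shown here:

```latex
\Psi(t) = \sum_k a_k(t)\,\varphi_k\, e^{-iE_k t/\hbar},
\qquad
i\hbar\,\dot a_j(t) = \sum_k a_k(t)\,
  \langle \varphi_j \,|\, V(t) \,|\, \varphi_k \rangle\,
  e^{i(E_j - E_k)t/\hbar}.
```

Inserting the expansion into the time-dependent equation and projecting onto φ_j removes the diagonal E_k terms, leaving the coupled first-order equations for the amplitudes a_j(t) from which quantities such as P(b) are computed.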

Relevance: 40.00%

Abstract:

Limnologists had an early preoccupation with lake classification. It gave a necessary structure to the many chemical and biological observations that were beginning to form the basis of one of the earliest truly environmental sciences. August Thienemann was the doyen of such classifiers, and his concept with Einar Naumann of oligotrophic and eutrophic lakes remains central to the world-view that limnologists still hold. Classification fell into disrepute, however, as it became clear that there would always be lakes that deviated from the prescriptions the classifiers made for them. Continua became the de rigueur concept, and lakes were seen as varying along many chemical, biological and geographic axes. Modern limnologists are comfortable with this concept; that all lakes are different guarantees an indefinite future for limnological research. For those who manage lakes and the landscapes in which they are set, however, it is not very useful. There may be as many as 300,000 standing water bodies in England and Wales alone, and perhaps as many again in Scotland. More than 80,000 are sizable (> 1 ha). A classification scheme is needed to cope with these numbers and, as human impacts increase, a system of assessing and monitoring change must be built into it. Although ways of classifying and monitoring running waters are well developed in the UK, the same is not true of standing waters. Sufficient understanding of what determines the nature and functioning of lakes exists to create a system with intellectual credibility as well as practical usefulness. This paper outlines the thinking behind a system that will be workable on a north European basis and presents some early results.

Relevance: 40.00%

Abstract:

Many communication signal processing applications involve modelling and inverting complex-valued (CV) Hammerstein systems. We develop a new CV B-spline neural network approach for efficient identification of the CV Hammerstein system and effective inversion of the estimated CV Hammerstein model. Specifically, the CV nonlinear static function in the Hammerstein system is represented using the tensor product of two univariate B-spline neural networks. An efficient alternating least squares estimation method is adopted for identifying the CV linear dynamic model's coefficients and the CV B-spline neural network's weights; it yields closed-form solutions for both, and the estimation process is guaranteed to converge very fast to a unique minimum solution. Furthermore, an accurate inversion of the CV Hammerstein system can readily be obtained using the estimated model. In particular, the inversion of the CV nonlinear static function in the Hammerstein system can be calculated effectively using a Gauss-Newton algorithm, which naturally incorporates the efficient De Boor algorithm with both the B-spline curve and first-order derivative recursions. The effectiveness of our approach is demonstrated by applying it to the equalisation of Hammerstein channels.
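The De Boor recursion mentioned above is standard; the paper's CV tensor-product and derivative versions are not reproduced here, but the underlying scalar evaluation looks like this (a minimal real-valued Python sketch):

```python
def de_boor(k, x, t, c, p):
    """Evaluate a degree-p B-spline curve at x using De Boor's recursion.
    k: knot-span index with t[k] <= x < t[k+1]
    t: knot vector, c: control coefficients, p: spline degree."""
    # Local copy of the p+1 coefficients that influence the span.
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Example: a quadratic (p = 2) spline on a clamped knot vector.
t = [0, 0, 0, 1, 2, 3, 3, 3]
c = [0.0, 2.0, 1.5, 3.0, 2.5]
print(de_boor(k=2, x=0.5, t=t, c=c, p=2))
```

For CV data the same recursion is applied with complex coefficients (or per tensor-product axis), which is why the algorithm slots naturally into the Gauss-Newton inversion step.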

Relevance: 40.00%

Abstract:

Drawing upon Brazilian experience, this research explores some of the key issues to be addressed in using e-government technical cooperation designed to enhance the service provision of patent offices in developing countries. While the development of software applications is often seen merely as a technical engineering exercise, localization and adaptation are context-bound matters characterized by many entanglements of humans and non-humans. This work also discusses the technical, legal and policy implications of technical cooperation in a complex and dynamic implementation environment, one influenced by powerful hidden agendas in the arena of intellectual property (IP) that are shaped by recent technological, economic and social developments in our current knowledge-based economy.

This research employs two different theoretical lenses to examine the same case: the transfer of a Patent Management System (PMS) from the European Patent Office (EPO) to the Brazilian Patent Office, locally named 'Instituto Nacional da Propriedade Industrial' (INPI). We have opted for a multi-paper thesis comprising an introduction, three scientific articles and a concluding chapter that discusses and compares the insights obtained from each article.

The first article presents an extensive literature review on e-government and technology transfer. This review allowed the proposition of an integrative meta-model of e-government technology transfer, named the E-government Transfer Model (ETM). In the second article, we present Actor-Network Theory (ANT) as a framework for understanding the processes of transferring e-government technologies from patent offices in developed countries to patent offices in developing countries. Overall, ANT is seen as having a potentially wide area of application and as a promising theoretical vehicle in IS research for the social analysis of the messy and heterogeneous processes that drive technical change. Drawing particularly on the works of Bruno Latour, Michel Callon and John Law, this article applies the theory to a longitudinal study of the management information systems supporting the Brazilian Patent Office restructuring plan, which involved the implementation of a European patent management system in Brazil. Based upon the elements of ANT, we follow the actors to identify and understand patterns of group formation associated with the technical cooperation between INPI and the EPO. The research thereby explores the intricate relationships and interactions between human and non-human actors in their attempts to construct various network alliances, demonstrating that technologies embody compromise. Finally, the third article applies the ETM as a heuristic frame to examine the same case previously studied from an ANT perspective. We find evidence that the ETM has strong heuristic qualities that can guide practitioners engaged in the transfer of e-government systems from developed to developing countries. The successful implementation of e-government projects in developing countries is important for stimulating economic growth, and we therefore need to understand the processes through which such projects are implemented and succeed.

Here, we attempt to improve understanding of the development and stabilization of a complex socio-technical system in the arena of intellectual property. Our preliminary findings suggest that e-government technology transfer is an inherently political process, and that successful outcomes require continuous incremental actions and improvisations to address ongoing issues as they emerge.

Relevance: 40.00%

Abstract:

We consider a four-parameter family of point interactions in one dimension. This family is a generalization of the usual delta-function potential. We examine a system consisting of many particles of equal mass interacting pairwise through such a generalized point interaction. We follow McGuire, who obtained exact solutions for the system when the interaction is the delta-function potential. We find exact bound states for the four-parameter family. For the scattering problem, however, we have not been so successful, because, as we point out, the condition of no diffraction that is crucial in McGuire's method is not satisfied except when the four-parameter family essentially reduces to the delta-function potential.
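The abstract does not write the family out; in the literature the general one-dimensional point interaction is commonly specified as a boundary condition at the origin, for instance (a sketch of one common parametrization, not necessarily the authors'):

```latex
% Wavefunction and derivative jump across the origin via an
% SL(2,R) matrix and a phase: three parameters from ad - bc = 1,
% plus theta, give four real parameters in total.
\begin{pmatrix} \psi(0^{+}) \\ \psi'(0^{+}) \end{pmatrix}
= e^{i\theta}
\begin{pmatrix} a & b \\ c & d \end{pmatrix}
\begin{pmatrix} \psi(0^{-}) \\ \psi'(0^{-}) \end{pmatrix},
\qquad ad - bc = 1,\quad \theta \in [0, 2\pi).
```

In this parametrization the ordinary delta-function potential of strength g corresponds to a = d = 1, b = 0, c = g and θ = 0 (continuous ψ, derivative jump proportional to ψ(0)).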

Relevance: 40.00%

Abstract:

In the research field of artificial intelligence, and in particular in machine learning, a whole range of methods inspired by biological models has become established. The most prominent representatives of such methods are evolutionary algorithms on the one hand and artificial neural networks on the other. This thesis is concerned with the development of a machine learning system that unites characteristics of both paradigms: the Hybrid Learning Classifier System (HCS) is developed on the basis of the real-valued eXtended Learning Classifier System (XCS), whose learning mechanism includes a genetic algorithm, and the Growing Neural Gas (GNG).

Like the XCS, the HCS uses a genetic algorithm to evolve a population of classifiers - rules of the form [IF condition THEN action], where the condition specifies the region of a learning problem's state space in which a classifier is applicable. In the XCS, the condition typically specifies an axis-parallel hyperrectangle, which often does not allow an adequate partitioning of the state space. In the HCS, by contrast, the classifiers' conditions are described by weight vectors, like those of the GNG's neurons. Each classifier is applicable within its cell of the Voronoi tessellation of the state space induced by the HCS population, so the state space can be partitioned more flexibly than in the XCS. The use of weight vectors also makes it possible to employ a mechanism derived from the GNG's neuron adaptation procedure as a second learning method alongside the genetic algorithm. Whereas learning in the XCS is purely evolutionary, i.e. proceeds only by creating new classifiers, this allows the HCS to adapt and improve classifiers that already exist.

To evaluate the HCS, various learning experiments are carried out with it. The capability of the approach is demonstrated on a series of learning problems from the areas of classification, function approximation, and the learning of actions in an interactive learning environment.
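To make the Voronoi-cell matching concrete, here is a minimal Python sketch under the simplest reading of the abstract: a state is matched by the classifier whose weight vector is nearest to it, since Voronoi-cell membership is exactly nearest-neighbour assignment. The data and names are illustrative, not taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((10, 2))          # population: 10 classifier conditions in 2-D
actions = rng.integers(0, 4, size=10)  # the action each classifier proposes

def match(state, weights):
    """Index of the classifier whose Voronoi cell contains `state`,
    i.e. the classifier with the nearest weight vector."""
    d = np.linalg.norm(weights - state, axis=1)
    return int(np.argmin(d))

state = np.array([0.3, 0.7])
winner = match(state, weights)
print(f"classifier {winner} fires, proposing action {actions[winner]}")
```

Because the cells are induced by the whole population, moving a single weight vector (as the GNG-derived adaptation step does) reshapes the partition locally without creating new classifiers.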

Relevance: 40.00%

Abstract:

Earth observations (EO) represent a growing and valuable resource for many scientific, research and practical applications carried out by users around the world. For some applications or activities, such as climate change research or emergency response, access to EO data is indispensable for success. However, EO data, or products made from them, are often (or are claimed to be) subject to intellectual property law protection and are licensed under specific conditions regarding access and use. Restrictive conditions on data use can be prohibitive for further work with the data.

The Global Earth Observation System of Systems (GEOSS) is an initiative led by the Group on Earth Observations (GEO) with the aim of providing coordinated, comprehensive, and sustained EO and information for making informed decisions in various areas beneficial to societies, their functioning and development. It seeks to share data with users world-wide with the fewest possible restrictions on their use by implementing the GEOSS Data Sharing Principles adopted by GEO. The Principles proclaim the full and open exchange of data shared within GEOSS, while recognising relevant international instruments and national policies and legislation through which restrictions on the use of data may be imposed.

The paper focuses on the legal interoperability of data that are shared with varying restrictions on use, with the aim of exploring the options for making data interoperable. The main question it addresses is whether the public domain or its equivalents represent the best mechanism to ensure the legal interoperability of data. To this end, the paper analyses legal protection regimes and their norms as applicable to EO data. Based on the findings, it highlights the existing public law statutory, regulatory, and policy approaches, as well as private law instruments, such as waivers, licenses and contracts, that may be used to place datasets in the public domain, or otherwise make them publicly available for use and re-use without restrictions. It uses GEOSS and its particular characteristics as a system to identify ways to reconcile the vast possibilities it provides through the sharing of data from various sources and jurisdictions, on the one hand, with the restrictions on the use of the shared resources on the other. On a more general level, the paper seeks to draw attention to the obstacles, and to potential regulatory solutions, for sharing factual or research data for purposes that go beyond research and education.