836 results for Research networks
Abstract:
This paper seeks to advance the theory and practice of the dynamics of complex networks in relation to direct and indirect citations. It applies social network analysis (SNA) and the ordered weighted averaging (OWA) operator to study a patent citation network. To date, SNA studies investigating long chains of patent citations have rarely been undertaken, and the importance of a node in a network has been associated mostly with its number of direct ties. In this research, OWA is used to analyse complex networks, assess the role of indirect ties, and provide guidance to reduce complexity for decision makers and analysts. An empirical example of a set of European patents published in 2000 in the renewable energy industry is provided to show the usefulness of the proposed approach for the preference ranking of patent citations.
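The OWA operator itself is standard: the arguments are sorted in descending order before being weighted, so the weights attach to rank positions rather than to particular inputs. The sketch below is a minimal illustration, assuming (hypothetically) that a patent's score aggregates its citation counts at increasing link distances; the weight vector is an arbitrary example, not one from the paper.

```python
import numpy as np

def owa(values: np.ndarray, weights: np.ndarray) -> float:
    """Ordered weighted averaging: weights apply to values sorted descending."""
    assert np.isclose(weights.sum(), 1.0), "OWA weights must sum to 1"
    return float(np.sort(values)[::-1] @ weights)

# Hypothetical example: citation counts of one patent at link distances 1..4
# (direct ties first), aggregated with weights favouring the largest counts.
tie_counts = np.array([12.0, 7.0, 3.0, 1.0])
weights = np.array([0.4, 0.3, 0.2, 0.1])
print(owa(tie_counts, weights))  # 0.4*12 + 0.3*7 + 0.2*3 + 0.1*1 = 7.6
```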
Abstract:
New global models of neural-network type for society are considered. The hierarchical structure of society and the mentality of the individual are taken into account. A way of incorporating the anticipatory (prognostic) ability of the individual into the model is proposed. Some implementations of the approach for real tasks, together with further research problems, are described. The multivaluedness of models and solutions is discussed, as is an analogy with sensory-motor systems. New problems for the theory and applications of neural networks are outlined.
Abstract:
Neural networks have been successfully employed in different biomedical settings. They have been useful for feature extraction from images and biomedical data in a variety of diagnostic applications. In this paper, they are applied as a diagnostic tool for classifying different levels of gastric electrical uncoupling in controlled acute experiments on dogs. Data were collected from 16 dogs using six bipolar electrodes inserted into the serosa of the antral wall. Each dog underwent three recordings under different conditions: (1) basal state, (2) mild surgically-induced uncoupling, and (3) severe surgically-induced uncoupling. Half-hour recordings were made for each condition. The neural network was implemented according to the Learning Vector Quantization (LVQ) model, a supervised variant of Kohonen's Self-Organizing Maps. The majority of the recordings collected from the dogs were used for network training; the remaining recordings served as a testing set to examine the validity of the training procedure. Approximately 90% of the dogs from the neural network training set were classified properly, but only 31% of the dogs not included in the training process were accurately diagnosed. The poor neural-network-based diagnosis of recordings that did not participate in the training process may have been caused by an inappropriate representation of the input data. Previous research has suggested characterizing signals according to certain features of the recorded data; this method, if employed, would reduce the noise and possibly improve the diagnostic abilities of the neural network.
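As a rough illustration of the classifier described above, here is a minimal LVQ1-style training loop; this is a sketch, not the authors' implementation, and the learning rate, epoch count, and prototype initialization are all assumptions.

```python
import numpy as np

def train_lvq1(X, y, n_classes, lr=0.05, epochs=30, seed=0):
    """Minimal LVQ1: one prototype per class, attracted to same-class
    samples and repelled from different-class samples."""
    rng = np.random.default_rng(seed)
    # Initialize each prototype at the mean of its class.
    protos = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
    labels = np.arange(n_classes)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = int(np.argmin(np.linalg.norm(protos - X[i], axis=1)))
            sign = 1.0 if labels[j] == y[i] else -1.0
            protos[j] += sign * lr * (X[i] - protos[j])
    return protos, labels

def predict(protos, labels, X):
    """Assign each sample the label of its nearest prototype."""
    d = np.linalg.norm(protos[None, :, :] - X[:, None, :], axis=2)
    return labels[np.argmin(d, axis=1)]
```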
Abstract:
Chaos control is a concept that has recently been attracting more attention in the research community, in fields including engineering, physics, chemistry, biology, and mathematics. This paper presents a method for the simultaneous control of deterministic chaos in several nonlinear dynamical systems. Radial basis function networks (RBFNs) have been used to control chaotic trajectories at the equilibrium points. Such a neural network improves results, avoiding problems that appear in other control methods, and it also copes efficiently with relatively small random dynamical noise.
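The function-approximation component of such a controller is the RBFN itself. The sketch below fits an RBFN to sampled data by solving the linear output weights in closed form; it is a generic illustration under assumed centers, width, and target function, not the paper's controller.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF activations for each (sample, center) pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width**2))

# Hypothetical 1-D example: approximate a nonlinear map from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)

centers = np.linspace(-2, 2, 15)[:, None]     # fixed grid of RBF centers
Phi = rbf_design(X, centers, width=0.3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output weights in closed form

y_hat = rbf_design(X, centers, 0.3) @ w
print("RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```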
Abstract:
The present paper is devoted to the creation of cryptographic data security and the realization of a packet mode in a distributed information measurement and control system that implements methods of optical spectroscopy for research in plasma physics and atomic collisions. The system provides remote access to information and hardware resources within Intranet/Internet networks. Access to the physical equipment is realized through the standard interface servers (PXI, CAMAC, and GPIB), the server providing access to Ethernet devices, and the communication server, which integrates the equipment servers into a uniform information system. The system is used to carry out research tasks in optical spectroscopy, as well as to support the process of education at the Department of Physics and Engineering of Petrozavodsk State University.
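A minimal sketch of the integration pattern described above: a communication server that routes client requests to registered equipment servers. Everything here is hypothetical (the line-based "DEVICE command" protocol, the port, and the stub handlers standing in for the PXI/CAMAC/GPIB servers); it only illustrates the dispatcher role, not the system's actual protocol or its cryptographic layer.

```python
import socketserver

# Hypothetical stubs standing in for the PXI, CAMAC, and GPIB servers.
EQUIPMENT_SERVERS = {
    "PXI":   lambda cmd: f"PXI ack: {cmd}",
    "CAMAC": lambda cmd: f"CAMAC ack: {cmd}",
    "GPIB":  lambda cmd: f"GPIB ack: {cmd}",
}

class RelayHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Expect one line per request: "<DEVICE> <command...>".
        line = self.rfile.readline().decode().strip()
        device, _, command = line.partition(" ")
        handler = EQUIPMENT_SERVERS.get(device)
        reply = handler(command) if handler else f"ERROR: unknown device {device}"
        self.wfile.write((reply + "\n").encode())

if __name__ == "__main__":
    with socketserver.TCPServer(("localhost", 9090), RelayHandler) as srv:
        srv.serve_forever()
```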
Abstract:
This research is focused on the optimisation of resource utilisation in wireless mobile networks, taking into consideration the users' experienced quality of video streaming services. The study specifically considers the new generation of mobile communication networks, i.e. 4G-LTE, as the main research context. The background study provides an overview of the main properties of the relevant technologies investigated, including video streaming protocols and networks, video service quality assessment methods, the infrastructure and related functionalities of LTE, and resource allocation algorithms in mobile communication systems.

A mathematical model based on an objective, no-reference quality assessment metric for video streaming, namely Pause Intensity, is developed in this work for the evaluation of the continuity of streaming services. The analytical model is verified by extensive simulation and subjective testing on the joint impairment effects of pause duration and pause frequency. Various types of video content and different levels of impairment have been used in the validation tests. It is shown that Pause Intensity is closely correlated with subjective quality measurement in terms of the Mean Opinion Score, and that this correlation property is content independent.

Based on the Pause Intensity metric, an optimised resource allocation approach is proposed for the given user requirements, communication system specifications, and network performance. This approach concerns both system efficiency and fairness when establishing appropriate resource allocation algorithms, together with the consideration of the correlation between the required and allocated data rates per user. Pause Intensity plays a key role here, representing the required level of Quality of Experience (QoE) to ensure the best balance between system efficiency and fairness. The 3GPP Long Term Evolution (LTE) system is used as the main application environment in which the proposed research framework is examined and the results are compared with existing scheduling methods on the achievable fairness, efficiency, and correlation.

Adaptive video streaming technologies are also investigated and combined with our initiatives on determining the distribution of QoE performance across the network. The resulting scheduling process is controlled through the prioritisation of users by considering their perceived quality for the services received, while a trade-off between fairness and efficiency is maintained through an online adjustment of the scheduler's parameters. Furthermore, Pause Intensity is applied as a regulator to realise the rate adaptation function during the end user's playback of the adaptive streaming service. The adaptive rates under various channel conditions, and the shape of the QoE distribution amongst users for different scheduling policies, have been demonstrated in the context of LTE.

Finally, work on interworking between the mobile communication system at the macro-cell level and different deployments of WiFi technologies throughout the macro-cell is presented. A QoE-driven approach is proposed to analyse the offloading mechanism for user data (e.g. video traffic), while the new rate distribution algorithm reshapes the network capacity across the macro-cell. The scheduling policy derived is used to regulate the performance of the resource allocation across the fair-efficient spectrum. The associated offloading mechanism can properly control the number of users within the coverage of the macro-cell base station and of each of the WiFi access points involved. The performance of non-seamless and user-controlled mobile traffic offloading (through mobile WiFi devices) has been evaluated and compared with that of standard operator-controlled WiFi hotspots.
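As a rough sketch of how a pause-based continuity metric can be computed from playback traces: the abstract does not reproduce the exact definition of Pause Intensity, so the combination below (pause-duration ratio multiplied by a normalized pause frequency) is an assumption made purely for illustration.

```python
def pause_intensity(pause_durations, session_seconds):
    """Hypothetical pause-based continuity metric: the fraction of session
    time spent paused, scaled by how often pauses occur (per minute)."""
    duration_ratio = sum(pause_durations) / session_seconds
    pauses_per_min = 60.0 * len(pause_durations) / session_seconds
    return duration_ratio * pauses_per_min

# Example: three stalls (2 s, 1 s, 4 s) in a 5-minute streaming session.
print(pause_intensity([2.0, 1.0, 4.0], 300.0))  # (7/300) * (3*60/300) = 0.014
```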
Abstract:
Computer networks are a critical factor in the performance of a modern company. Managing networks is as important as managing any other aspect of the company's performance and security. There are many tools and appliances for monitoring traffic and analyzing network flow security. They use different approaches and rely on a variety of characteristics of the network flows. Network researchers are still working on a common approach to security baselining that might enable early-watch alerts. This research focuses on network security models, particularly the mitigation of Denial-of-Service (DoS) attacks, based on network flow analysis using flow measurements and the theory of Markov models. The content of the paper comprises the essentials of the author's doctoral thesis.
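A minimal sketch of the kind of flow analysis this builds on: estimate a first-order Markov transition matrix from a baseline sequence of discretized flow states, then flag observation windows whose average transition log-likelihood under that model is unusually low. The state discretization, smoothing, and alert threshold are all assumptions, not the thesis's actual model.

```python
import numpy as np

def fit_markov(states, n_states, alpha=1.0):
    """Estimate a transition matrix from a baseline state sequence,
    Laplace-smoothed so unseen transitions keep nonzero probability."""
    counts = np.full((n_states, n_states), alpha)
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def window_loglik(P, window):
    """Average log-likelihood of the transitions in an observed window;
    abnormally low values would raise an early-watch alert."""
    return np.mean([np.log(P[a, b]) for a, b in zip(window[:-1], window[1:])])

# Hypothetical use: states 0..3 bin flows by, e.g., packet rate.
baseline = np.random.default_rng(0).integers(0, 4, size=5000)
P = fit_markov(baseline, n_states=4)
print(window_loglik(P, baseline[:100]))
```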
Abstract:
Alzheimer's disease (AD) is the most common form of dementia, affecting more than 35 million people worldwide. Brain hypometabolism is a major feature of AD, appearing decades before cognitive decline and pathologic lesions. To date, the majority of studies on hypometabolism in AD have used transgenic animal models or imaging studies of the human brain. As it is almost impossible to validate these findings using human tissue, alternative models are required. In this study, we show that human stem cell-derived neuron and astrocyte cultures treated with oligomers of amyloid beta 1-42 (Aβ1-42) also display a clear hypometabolism, particularly with regard to utilization of substrates such as glucose, pyruvate, lactate, and glutamate. In addition, a significant increase in the glycogen content of cells was also observed. These changes were accompanied by changes in NAD+/NADH, ATP, and glutathione levels, suggesting a disruption in the energy-redox axis within these cultures. The high energy demands associated with neuronal functions such as memory formation and protection from oxidative stress put these cells at particular risk from Aβ-induced hypometabolism. Further research using this model may elucidate the mechanisms associated with Aβ-induced hypometabolism.
Abstract:
This paper presents for the first time the concept of measurement assisted assembly (MAA) and outlines the research priorities for the realisation of this concept in industry. MAA denotes a paradigm shift in assembly for high-value and complex products and encompasses the development and use of novel metrology processes for the holistic integration and capability enhancement of key assembly and ancillary processes. A complete framework for MAA is detailed, showing how this can facilitate a step change in assembly process capability and efficiency for large and complex products, such as airframes, where traditional assembly processes exhibit the requirement for rectification and rework, use inflexible tooling, and are largely manual, resulting in cost and cycle time pressures. The concept of MAA encompasses a range of innovative measurement-assisted processes which enable rapid part-to-part assembly, increased use of flexible automation, traceable quality assurance and control, reduced structure weight, and improved levels of precision across the dimensional scales. A full-scale industrial trial of MAA technologies has been carried out on an experimental aircraft wing, demonstrating the viability of the approach, while studies within 140 smaller companies have highlighted the need for better adoption of existing process capability and quality control standards. The identified research priorities for MAA include the development of both frameless and tooling-embedded automated metrology networks. Other research priorities relate to the development of integrated dimensional variation management, thermal compensation algorithms, and measurement planning and inspection algorithms linking design to measurement and process planning.
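One of the listed research priorities, thermal compensation, rests on the standard linear expansion relation L = L0(1 + αΔT). The sketch below scales a measured length back to the 20 °C metrological reference temperature; the coefficient and the example values are illustrative, not data from the paper.

```python
ALPHA_AL = 23.1e-6  # linear thermal expansion coefficient of aluminium, 1/K

def compensate_length(measured_mm: float, temp_c: float,
                      ref_temp_c: float = 20.0, alpha: float = ALPHA_AL) -> float:
    """Scale a length measured at temp_c back to the 20 C reference."""
    return measured_mm / (1.0 + alpha * (temp_c - ref_temp_c))

print(compensate_length(1000.05, 26.0))  # ~999.91 mm at reference temperature
```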
Abstract:
In recent years, the rapid spread of smartphones has led to the increasing popularity of Location-Based Social Networks (LBSNs). Although a number of research studies and articles in the press have shown the dangers of exposing personal location data, the inherent nature of LBSNs encourages users to publish information about their current location (i.e., their check-ins). The same is true for the majority of the most popular social networking websites, which offer the possibility of associating the current location of users with their posts and photos. Moreover, some LBSNs, such as Foursquare, let users tag their friends in their check-ins, thus potentially releasing location information of individuals who have no control over the published data. This raises additional privacy concerns for the management of location information in LBSNs. In this paper we propose and evaluate a series of techniques for the identification of users from their check-in data. More specifically, we first present two strategies according to which users are characterized by the spatio-temporal trajectory emerging from their check-ins over time and by the frequency of visits to specific locations, respectively. In addition to these approaches, we also propose a hybrid strategy that is able to exploit both types of information. It is worth noting that these techniques can be applied to a more general class of problems where locations and social links of individuals are available in a given dataset. We evaluate our techniques by means of three real-world LBSN datasets, demonstrating that a very limited number of data points is sufficient to identify a user with a high degree of accuracy. For instance, we show that in some datasets we are able to classify more than 80% of the users correctly.
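A minimal sketch of the frequency-of-visit strategy, under assumptions of ours rather than the paper's exact method: each user is profiled by a normalized visit-count vector over a fixed set of location IDs, and an anonymous trace is attributed to the user with the most similar profile (cosine similarity here).

```python
import numpy as np

def visit_profile(location_ids, n_locations):
    """Normalized visit-frequency vector over a fixed set of location IDs."""
    v = np.bincount(location_ids, minlength=n_locations).astype(float)
    return v / v.sum()

def identify(anon_checkins, user_profiles, n_locations):
    """Attribute an anonymous check-in trace to the known user whose
    visit-frequency profile is most similar (cosine similarity)."""
    q = visit_profile(anon_checkins, n_locations)
    sims = {u: p @ q / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)
            for u, p in user_profiles.items()}
    return max(sims, key=sims.get)

# Hypothetical usage with two known users and a short anonymous trace.
profiles = {"alice": visit_profile(np.array([0, 0, 1, 2]), 4),
            "bob":   visit_profile(np.array([3, 3, 3, 1]), 4)}
print(identify(np.array([0, 1, 0]), profiles, 4))  # -> "alice"
```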
Abstract:
In this work, different artificial neural networks (ANNs) are developed for the prediction of surface roughness (Ra) values in Al alloy 7075-T7351 after a face milling machining process. The radial basis (RBNN), feed-forward (FFNN), and generalized regression (GRNN) networks were selected, and the data used for training these networks were derived from experiments conducted on a high-speed milling machine. The Taguchi design of experiments was applied to reduce the time and cost of the experiments. The performance of each ANN used in this research was measured with the mean square error percentage, and it was observed that FFNN achieved the best results. The Pearson correlation coefficient was also calculated to analyze the correlation between the five inputs selected for the network (cutting speed, feed per tooth, axial depth of cut, chip's width, and chip's thickness) and the selected output (surface roughness). Results showed a strong correlation between chip thickness and surface roughness, followed by cutting speed.
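The correlation step is a plain Pearson analysis and is easy to reproduce. A sketch follows with hypothetical measurements; the arrays merely stand in for the experimental columns, which the abstract does not provide.

```python
import numpy as np

# Hypothetical stand-ins for the experimental data columns.
chip_thickness = np.array([0.08, 0.10, 0.12, 0.15, 0.18, 0.22])     # mm
surface_roughness = np.array([0.31, 0.38, 0.45, 0.58, 0.66, 0.80])  # Ra, um

r = np.corrcoef(chip_thickness, surface_roughness)[0, 1]
print(f"Pearson r = {r:.3f}")  # values near 1 indicate strong correlation
```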
Abstract:
The main focus of this paper is on mathematical theory and methods which have a direct bearing on problems involving multiscale phenomena. Modern technology is refining measurement and data collection to spatio-temporal scales on which observed geophysical phenomena are displayed as intrinsically highly variable and intermittent hierarchical structures, e.g. rainfall, turbulence, etc. The hierarchical structure is reflected in the occurrence of a natural separation of scales which collectively manifest at some basic unit scale. Thus proper data analysis and inference require a mathematical framework which couples the variability over multiple decades of scale, in which basic theoretical benchmarks can be identified and calculated. This continues the main theme of the research in this area of applied probability over the past twenty years.
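As an illustrative benchmark of the kind referred to (standard scaling relations, not equations taken from the paper itself), self-similar and multifractal scaling couple variability across decades of scale as follows:

```latex
X(\lambda t) \overset{d}{=} \lambda^{H} X(t),
\qquad
\mathbb{E}\,\lvert X(\lambda t)\rvert^{q} \;=\; \lambda^{\zeta(q)}\,\mathbb{E}\,\lvert X(t)\rvert^{q},
```

with $\zeta(q) = qH$ in the monofractal (self-similar) case, while a strictly concave $\zeta$ characterizes the intermittent, multifractal fields typified by rainfall and turbulence.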
Abstract:
Recent poverty research focuses on household responses to poverty through structure-versus-agency perspectives. The human agency perspective, however, provides important insights for looking beyond simplistic tendencies which portray poor people as inherently passive or envision them as helpless victims. In Turkey, politicians view poverty as a temporary and manageable problem which can be dealt with through the provision of more charity or community support. Migrant networks, informal-sector work, and social assistance are considered to be important mechanisms that provide resources for the poor. This paper argues that for some poor households none of these mechanisms provides sufficient resources. Instead, neighbourhood-based small-group solidarities and self-help networks enable those poor to develop collective capabilities and make ends meet. The paper also reveals that in Turkey the implementation of social policies for poverty reduction can bring about relationships of patronage and, in some cases, contribute to existing inequalities.
Abstract:
This issue of Philosophical Transactions of the Royal Society, Part A represents a summary of the recent discussion meeting 'Communication networks beyond the capacity crunch'. The purpose of the meeting was to establish the nature of the capacity crunch, to estimate the time scales associated with it, and to begin to find solutions to enable continued growth in a post-crunch era. The meeting confirmed that, in addition to a capacity shortage within a single optical fibre, many other 'crunches' are foreseen in the field of communications, both societal and technical. Technical crunches identified included the nonlinear Shannon limit, wireless spectrum, and the distribution of 5G signals (fronthaul and backhaul), while societal influences included net neutrality, creative content generation and distribution, latency, and finally energy and cost. The meeting concluded with the observation that these many crunches are genuine and may influence our future use of technology, but encouragingly noted that research and business practice are already moving to alleviate many of the negative consequences.