983 results for Solution techniques
Abstract:
Internet traffic classification is a relevant and mature research field, yet one of growing importance and with still-open technical challenges, due in part to the pervasive presence of Internet-connected devices in everyday life. We argue the need for innovative traffic classification solutions that are lightweight, adopt a domain-based approach, and not only perform application-level protocol categorization but also classify Internet traffic by subject. To this purpose, this paper proposes a novel classification solution that leverages domain name information extracted from IPFIX summaries, DNS logs, and DHCP leases, and that can be applied to any kind of traffic. Our proposed solution is based on an extension of Word2vec unsupervised learning techniques running on a specialized Apache Spark cluster. In particular, the learning techniques are leveraged to generate word embeddings from a mixed dataset composed of domain names and natural language corpora, in a lightweight way and with general applicability. The paper also reports lessons learnt from our implementation and deployment experience, which demonstrates that our solution can process 5500 IPFIX summaries per second on an Apache Spark cluster with one slave instance in Amazon EC2 at a cost of $3860 per year. Reported experimental results for Precision, Recall, F-Measure, Accuracy, and Cohen's Kappa show the feasibility and effectiveness of the proposal. The experiments prove that words contained in domain names do have a relation with the kind of traffic directed towards them; therefore, using specifically trained word embeddings, we are able to classify them into customizable categories. We also show that training word embeddings on larger natural language corpora leads to improvements in precision of up to 180%.
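As an illustration of the domain-name idea, the sketch below classifies a domain by averaging the word embeddings of its tokens and matching against category centroids. The tiny embedding table, category names, and domains are invented stand-ins for the paper's trained Word2vec vectors, not its actual model:

```python
# Sketch: subject classification of a domain name via word embeddings.
# EMB and CATEGORIES are toy assumptions, not the paper's trained data.
import math

EMB = {
    "news": [0.9, 0.1, 0.0], "daily": [0.8, 0.2, 0.1],
    "game": [0.1, 0.9, 0.1], "play": [0.2, 0.8, 0.0],
    "shop": [0.0, 0.1, 0.9], "store": [0.1, 0.0, 0.8],
}
CATEGORIES = {"news": ["news", "daily"],
              "gaming": ["game", "play"],
              "shopping": ["shop", "store"]}

def vec_mean(vs):
    # Component-wise mean of a list of equal-length vectors.
    return [sum(c) / len(vs) for c in zip(*vs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def tokenize(domain):
    # Strip the TLD and split on separators; a real system would also
    # segment concatenated words inside labels.
    parts = domain.lower().split(".")[:-1]
    toks = []
    for p in parts:
        toks += [t for t in p.replace("-", " ").split() if t in EMB]
    return toks

def classify(domain):
    toks = tokenize(domain)
    if not toks:
        return None
    v = vec_mean([EMB[t] for t in toks])
    cents = {c: vec_mean([EMB[w] for w in ws]) for c, ws in CATEGORIES.items()}
    return max(cents, key=lambda c: cosine(v, cents[c]))

print(classify("daily-news.example.com"))  # prints "news"
```

The same centroid comparison extends to any customizable category set by swapping the word lists.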
Abstract:
An Australian natural zeolite was collected, characterised and employed for basic dye adsorption in aqueous solution. The natural zeolite is mainly composed of clinoptilolite, quartz and mordenite and has a cation-exchange capacity of 120 meq/100 g. It presents a higher adsorption capacity for methylene blue than for rhodamine B, with maximal adsorption capacities of 2.8 x 10^-5 and 7.9 x 10^-5 mol/g at 50 °C for rhodamine B and methylene blue, respectively. Kinetic studies indicated that the adsorption followed pseudo-second-order kinetics and could be described as a two-stage diffusion process. The adsorption isotherm could be fitted by the Langmuir and Freundlich models. Thermodynamic calculations showed that the adsorption is an endothermic process, with ΔH° of 2.0 and 8.7 kJ/mol for rhodamine B and methylene blue, respectively. It was also found that zeolites regenerated by high-temperature calcination and by Fenton oxidation showed similar adsorption capacities, but lower than the fresh sample; only 60% of the capacity could be recovered by the two regeneration techniques. (c) 2006 Elsevier B.V. All rights reserved.
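The Langmuir fit mentioned above can be sketched numerically. The example below fits the linearised Langmuir form C/q = C/q_max + 1/(K·q_max) to synthetic uptake data; the concentrations and constants are illustrative assumptions, not the study's measurements:

```python
# Sketch: fitting the Langmuir isotherm q = q_max*K*C/(1+K*C) via its
# linearised form. Synthetic data stand in for the measured uptakes.
import numpy as np

q_max_true, K_true = 7.9e-5, 2.0e3                 # assumed mol/g, L/mol
C = np.array([1e-4, 5e-4, 1e-3, 5e-3, 1e-2])       # equilibrium conc. (mol/L)
q = q_max_true * K_true * C / (1 + K_true * C)     # ideal Langmuir uptake

slope, intercept = np.polyfit(C, C / q, 1)         # C/q vs C is linear
q_max_fit = 1 / slope
K_fit = slope / intercept

print(f"q_max = {q_max_fit:.2e} mol/g, K = {K_fit:.1f} L/mol")
```

With noise-free data the fit recovers the generating parameters exactly; for real isotherm data the same regression yields the reported maximal capacity.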
Abstract:
Purpose: Tissue Doppler strain rate imaging (SRI) has been validated and applied in various clinical settings, but the clinical use of this modality is still limited due to time-consuming postprocessing, an unfavorable signal-to-noise ratio and the major angle dependency of image acquisition. 2D Strain (2DS) measures strain parameters through automated tissue tracking (Lagrangian strain) rather than tissue velocity regression. We sought to compare the accuracy of this technique with SRI and to evaluate whether it overcomes the above limitations. Methods: We assessed 26 patients (13 female, age 60±5 yrs) at low risk of CAD and with normal DSE at both baseline and peak stress. End-systolic strain (ESS), peak systolic strain rate (SR), and timing parameters were measured by two independent observers using SRI and 2D Strain. Myocardial segments were excluded from the analyses if the insonation angle exceeded 30 degrees or if the segments were not visualized; 417 segments were evaluated. Results: Normal ranges for TVI and CEB approaches were comparable for SR (-0.99 ± 0.39 vs -0.88 ± 0.36, p=NS), ESS (-15.1 ± 6.5 vs -14.9 ± 6.3, p=NS), time to end of systole (174 ± 47 vs 174 ± 53, p=NS) and time to peak SR (TTP; 340 ± 34 vs 375 ± 57). The best correlations between the techniques were for time to end systole (rest r=0.6, p
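The distinction between the two measurement approaches can be made explicit with the standard deformation definitions (general formulas, not taken from this study):

```latex
% Lagrangian strain from a tracked segment length L(t) with reference
% length L_0 (the quantity 2D Strain estimates by tissue tracking):
\varepsilon(t) = \frac{L(t) - L_0}{L_0}
% Tissue-Doppler strain rate from the velocity gradient between two
% sample points a distance d apart along the ultrasound beam (the
% quantity SRI estimates, hence its angle dependency):
SR(t) \approx \frac{v_1(t) - v_2(t)}{d}
```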
Abstract:
In this thesis, we consider four different scenarios of interest in modern satellite communications. For each scenario, we will propose the use of advanced solutions aimed at increasing the spectral efficiency of the communication links. First, we will investigate the optimization of the current standard for digital video broadcasting. We will increase the symbol rate of the signal and determine the optimal signal bandwidth. We will apply the time packing technique and propose a specifically designed constellation. We will then compare some receiver architectures with different performance and complexity. The second scenario still addresses broadcast transmissions, but in a network composed of two satellites. We will compare three alternative transceiver strategies, namely signals completely overlapped in frequency, frequency division multiplexing, and the Alamouti space-time block code, and, for each technique, we will derive theoretical results on the achievable rates. We will also evaluate the performance of these techniques in three different channel models. The third scenario deals with the application of multiuser detection in multibeam satellite systems. We will analyze a case in which the users are near the edge of the coverage area and, hence, experience a high level of interference from adjacent cells. In this case too, three different approaches will be compared: a classical approach in which each beam carries information for a user, a cooperative solution based on time division multiplexing, and the Alamouti scheme. The information theoretical analysis will be followed by the study of practical coded schemes. We will show that the theoretical bounds can be approached by a properly designed code or bit mapping. Finally, we will consider an Earth observation scenario, in which data is generated on the satellite and then transmitted to the ground. 
We will study two channel models, taking into account one or two transmit antennas, and apply techniques such as time and frequency packing, signal predistortion, multiuser detection and the Alamouti scheme.
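For the Alamouti scheme that recurs across these scenarios, a minimal noise-free encoding and combining sketch is given below; the symbols and channel gains are illustrative choices, not taken from the thesis:

```python
# Sketch: Alamouti 2x1 space-time block coding over a flat channel,
# noise-free, to show the interference-free combining property.
import numpy as np

def alamouti_encode(s1, s2):
    # Rows = time slots, columns = transmit antennas.
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    # Linear combining yields scaled, interference-free symbol estimates.
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

rng = np.random.default_rng(0)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)  # QPSK symbols
h = rng.standard_normal(2) + 1j * rng.standard_normal(2)
h1, h2 = h                                             # channel gains

X = alamouti_encode(s1, s2)
r1 = X[0, 0] * h1 + X[0, 1] * h2   # received, slot 1
r2 = X[1, 0] * h1 + X[1, 1] * h2   # received, slot 2
g = abs(h1) ** 2 + abs(h2) ** 2    # combining gain |h1|^2 + |h2|^2
s1_hat, s2_hat = alamouti_combine(r1, r2, h1, h2)
print(np.allclose(s1_hat / g, s1), np.allclose(s2_hat / g, s2))  # prints "True True"
```

The combining step shows why the scheme achieves full diversity with a simple linear receiver, which is what makes it attractive in the two-satellite and multibeam comparisons above.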
Abstract:
Magnetoencephalography (MEG) is a non-invasive brain imaging technique with the potential for very high temporal and spatial resolution of neuronal activity. The main stumbling block for the technique has been that the estimation of a neuronal current distribution, based on sensor data outside the head, is an inverse problem with an infinity of possible solutions. Many inversion techniques exist, all using different a priori assumptions in order to reduce the number of possible solutions. Although all techniques can be thoroughly tested in simulation, implicit in the simulations are the experimenter's own assumptions about realistic brain function. To date, the only way to test the validity of inversions based on real MEG data has been through direct surgical validation, or through comparison with invasive primate data. In this work, we constructed a null hypothesis that the reconstruction of neuronal activity contains no information on the distribution of the cortical grey matter. To test this, we repeatedly compared rotated sections of grey matter with a beamformer estimate of neuronal activity to generate a distribution of mutual information values. The significance of the comparison between the un-rotated anatomical information and the electrical estimate was subsequently assessed against this distribution. We found that there was significant (P < 0.05) anatomical information contained in the beamformer images across a number of frequency bands. Based on the limited data presented here, we can say that the assumptions behind the beamformer algorithm are not unreasonable for the visual-motor task investigated.
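The core comparison is a mutual information computation between two spatial maps. A histogram-based sketch is shown below; the arrays are synthetic stand-ins for the grey-matter and beamformer volumes, and the full test would build a null distribution over many rotations rather than a single unrelated map:

```python
# Sketch: histogram-based mutual information between two spatial maps,
# as used to compare a beamformer image with grey-matter anatomy.
import numpy as np

def mutual_information(x, y, bins=16):
    # Joint histogram -> joint/marginal probabilities -> MI in nats.
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(1)
anatomy = rng.random((32, 32))
related = anatomy + 0.1 * rng.random((32, 32))   # correlated "beamformer" map
unrelated = rng.random((32, 32))                 # stand-in for a rotated section

print(mutual_information(anatomy, related) >
      mutual_information(anatomy, unrelated))    # prints "True"
```

Significance is then the rank of the un-rotated MI value within the rotated-section distribution.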
Abstract:
The advent of personal communication systems within the last decade has depended upon the utilization of advanced digital schemes for source and channel coding and for modulation. The inherent digital nature of the communications processing has allowed the convenient incorporation of cryptographic techniques to implement security in these communications systems. There are various security requirements, of both the service provider and the mobile subscriber, which may be provided for in a personal communications system. Such security provisions include the privacy of user data, the authentication of communicating parties, the provision for data integrity, and the provision for both location confidentiality and party anonymity. This thesis is concerned with an investigation of the private-key and public-key cryptographic techniques pertinent to the security requirements of personal communication systems; an analysis of the security provisions of Second-Generation personal communication systems is presented. Particular attention has been paid to the properties of the cryptographic protocols which have been employed in current Second-Generation systems. It has been found that certain security-related protocols implemented in the Second-Generation systems have specific weaknesses. A theoretical evaluation of these protocols has been performed using formal analysis techniques, and certain assumptions made during the development of the systems are shown to contribute to the security weaknesses. Various attack scenarios which exploit these protocol weaknesses are presented. The Fiat-Shamir zero-knowledge cryptosystem is presented as an example of how asymmetric algorithm cryptography may be employed as part of an improved security solution. Various modifications to this cryptosystem have been evaluated and their critical parameters are shown to be capable of being optimized to suit particular applications. 
The implementation of such a system using current smart card technology has been evaluated.
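One round of the Fiat-Shamir identification protocol can be sketched as follows; the modulus and secret below are toy values chosen for illustration and are far too small for real security:

```python
# Sketch: one round of Fiat-Shamir zero-knowledge identification.
# Toy parameters only; a real deployment uses a large RSA-size modulus.
import random

random.seed(0)
p, q = 1009, 1013            # secret primes (toy sizes)
n = p * q                    # public modulus
s = 123456 % n               # prover's secret
v = pow(s, 2, n)             # public key v = s^2 mod n

def prove_round():
    r = random.randrange(1, n)        # prover's random commitment value
    x = pow(r, 2, n)                  # commitment x = r^2 mod n
    e = random.randrange(2)           # verifier's challenge bit
    y = (r * pow(s, e, n)) % n        # response y = r * s^e mod n
    return x, e, y

def verify(x, e, y):
    # Accept iff y^2 = x * v^e (mod n).
    return pow(y, 2, n) == (x * pow(v, e, n)) % n

print(all(verify(*prove_round()) for _ in range(20)))  # prints "True"
```

Each round a cheating prover succeeds with probability 1/2, so repeated rounds drive the cheating probability down exponentially, which is the property exploited in the smart-card setting.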
Abstract:
Purpose: The use of PHMB as a disinfectant in contact lens multipurpose solutions has been at the centre of much debate in recent times, particularly in relation to the issue of solution-induced corneal staining. Clinical studies have been carried out which suggest different effects with individual contact lens materials used in combination with specific PHMB-containing care regimes. There does not appear to be, however, a reliable analytical technique that would detect and quantify with any degree of accuracy the specific levels of PHMB that are taken up and released from individual solutions by the various contact lens materials. Methods: PHMB is a mixture of positively charged polymer units of varying molecular weight that has a maximum absorbance wavelength of 236 nm. On the basis of these properties a range of assays, including capillary electrophoresis, HPLC, a nickel–nioxime colorimetric technique, mass spectrometry, UV spectroscopy and ion chromatography, were assessed, paying particular attention to each of their constraints and detection levels. Particular interest was focused on the relative advantage of contactless conductivity compared to UV and mass spectrometry detection in capillary electrophoresis (CE). This study provides an overview of the comparative performance of these techniques. Results: The UV absorbance of PHMB solutions, ranging from 0.0625 to 50 ppm, was measured at 236 nm. Within this range the calibration curve appears to be linear; however, absorption values below 1 ppm (0.0001%) were extremely difficult to reproduce. The concentration of PHMB in solutions is in the range of 0.0002–0.00005%, and our investigations suggest that levels of PHMB below 0.0001% (levels encountered in uptake and release studies) cannot be accurately estimated, particularly when analysing complex lens care solutions, which can contain competitively absorbing, and thus interfering, species. 
The use of separative methodologies, such as CE using UV detection alone, is similarly limited. Alternative techniques, including contactless conductivity detection, offer greater discrimination in complex solutions together with the opportunity for dual-channel detection. Preliminary results achieved by TraceDec contactless conductivity detection (gain 150%, offset 150), in conjunction with the Agilent capillary electrophoresis system using a bare fused silica capillary (extended light path, 50 μm i.d., total length 64.5 cm, effective length 56 cm) and a cationic buffer at pH 3.2, exhibit great potential with reproducible PHMB split peaks. Conclusions: PHMB-based solutions are commonly associated with the potential to invoke corneal staining in combination with certain contact lens materials. However, this terminology, 'PHMB-based solution', is used primarily because PHMB itself has yet to be adequately implicated as the causative agent of the staining and compromised corneal cell integrity. The lack of well characterised, adequately sensitive assays, coupled with the range of additional components that characterise individual care solutions, poses a major barrier to the investigation of PHMB interactions in the lens-wearing eye.
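The linearity and detection-limit problem described above can be illustrated with a simple calibration sketch. The absorbance data are synthetic, and the 3σ/slope detection-limit estimate is a common analytical convention, not necessarily the study's protocol:

```python
# Sketch: linear calibration of absorbance at 236 nm vs concentration
# and a 3*sigma/slope limit-of-detection estimate. Synthetic data.
import numpy as np

conc = np.array([0.0625, 0.25, 1.0, 5.0, 25.0, 50.0])    # ppm standards
rng = np.random.default_rng(2)
absorb = 0.02 * conc + rng.normal(0, 0.002, conc.size)   # Beer-Lambert-like

slope, intercept = np.polyfit(conc, absorb, 1)
resid = absorb - (slope * conc + intercept)
sigma = resid.std(ddof=2)          # residual standard deviation
lod = 3 * sigma / slope            # limit of detection, ppm

print(f"slope = {slope:.4f} AU/ppm, LOD = {lod:.3f} ppm")
```

An LOD on the order of a few tenths of a ppm is consistent with the paper's point: sub-ppm PHMB levels fall at or below what a plain UV calibration can reliably quantify.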
Abstract:
The first part of the thesis compares Roth's method with other methods, in particular the method of separation of variables and the finite cosine transform method, for solving certain elliptic partial differential equations arising in practice. In particular we consider the solution of steady state problems associated with insulated conductors in rectangular slots. Roth's method has two main disadvantages, namely the slow rate of convergence of the double Fourier series and the restrictive form of the allowable boundary conditions. A combined Roth-separation of variables method is derived to remove the restrictions on the form of the boundary conditions, and various Chebyshev approximations are used to try to improve the rate of convergence of the series. All the techniques are then applied to the Neumann problem arising from balanced rectangular windings in a transformer window. Roth's method is then extended to deal with problems other than those resulting from static fields. First we consider a rectangular insulated conductor in a rectangular slot when the current is varying sinusoidally with time. An approximate method is also developed and compared with the exact method. The approximation is then used to consider the problem of an insulated conductor in a slot facing an air gap. We also consider the exact method applied to the determination of the eddy-current loss produced in an isolated rectangular conductor by a transverse magnetic field varying sinusoidally with time. The results obtained using Roth's method are critically compared with those obtained by other authors using different methods. The final part of the thesis investigates further the application of Chebyshev methods to the solution of elliptic partial differential equations, an area where Chebyshev approximations have rarely been used. A Poisson equation with a polynomial term is treated first, followed by a slot problem in cylindrical geometry.
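The double Fourier series expansion underlying Roth's method can be sketched for a model Poisson problem. The sketch below solves -∇²u = f on the unit square with zero Dirichlet data by a double sine series; the manufactured source term and grid size are illustrative choices, not a thesis case:

```python
# Sketch: double Fourier (sine) series solution of -laplacian(u) = f
# on the unit square with homogeneous Dirichlet boundary conditions.
import numpy as np

M = 64                                 # interior grid points per direction
h = 1.0 / (M + 1)
x = np.arange(1, M + 1) * h
X, Y = np.meshgrid(x, x, indexing="ij")

# Manufactured solution u = sin(pi x) sin(2 pi y), so f = 5 pi^2 u.
u_exact = np.sin(np.pi * X) * np.sin(2 * np.pi * Y)
f = 5 * np.pi**2 * u_exact

modes = np.arange(1, 21)
S = np.sin(np.pi * np.outer(modes, x))         # sine basis, (modes, M)
A = 4 * h * h * (S @ f @ S.T)                  # double sine coefficients of f
lam = np.pi**2 * (modes[:, None]**2 + modes[None, :]**2)  # eigenvalues
u = S.T @ (A / lam) @ S                        # invert mode by mode

print(float(np.abs(u - u_exact).max()))        # near machine precision here
```

For smooth single-mode data the series is exact; the slow convergence the thesis addresses appears when the boundary data force many slowly decaying terms.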
Using interior point algorithms for the solution of linear programs with special structural features
Abstract:
Linear Programming (LP) is a powerful decision making tool extensively used in various economic and engineering activities. In the early stages the success of LP was mainly due to the efficiency of the simplex method. After the appearance of Karmarkar's paper, the focus of most research shifted to the field of interior point methods. The present work is concerned with investigating and efficiently implementing the latest techniques in this field, taking sparsity into account. The performance of these implementations on different classes of LP problems is reported here. The preconditioned conjugate gradient method is one of the most powerful tools for the solution of the least squares problem present in every iteration of all interior point methods. The effect of using different preconditioners on a range of problems with various condition numbers is presented. Decomposition algorithms have been one of the main fields of research in linear programming over the last few years. After reviewing the latest decomposition techniques, three promising methods were chosen and implemented. Sparsity is again a consideration, and suggestions have been included to allow improvements when solving problems with these methods. Finally, experimental results on randomly generated data are reported and compared with an interior point method. The efficient implementation of the decomposition methods considered in this study requires the solution of quadratic subproblems. A review of recent work on algorithms for convex quadratic programming was performed. The most promising algorithms are discussed and implemented taking sparsity into account. The related performance of these algorithms on randomly generated separable and non-separable problems is also reported.
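The preconditioned conjugate gradient step can be sketched compactly. The example below uses a Jacobi (diagonal) preconditioner on a small synthetic SPD system of the kind that arises in the normal-equations step of an interior point iteration; the matrix and preconditioner choice are illustrative, not the implementations studied in the thesis:

```python
# Sketch: Jacobi-preconditioned conjugate gradient on an SPD system,
# standing in for the least-squares step of an interior point method.
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv * r                     # apply diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # preconditioned search direction
        rz = rz_new
    return x

rng = np.random.default_rng(3)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)         # SPD, akin to A D A^T in an IPM step
b = rng.standard_normal(50)
M_inv = 1.0 / np.diag(A)              # Jacobi preconditioner

x = pcg(A, b, M_inv)
print(np.linalg.norm(A @ x - b))      # small residual
```

Swapping `M_inv` for a better preconditioner (e.g. incomplete Cholesky) is exactly the kind of variation whose effect the thesis measures across condition numbers.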
Abstract:
The investigations described in this thesis concern the molecular interactions between polar solute molecules and various aromatic compounds in solution. Three different physical methods were employed. Nuclear magnetic resonance (n.m.r.) spectroscopy was used to determine the nature and strength of the interactions and the geometry of the transient complexes formed. Cryoscopic studies were used to provide information on the stoichiometry of the complexes. Dielectric constant studies were conducted in an attempt to confirm and supplement the spectroscopic investigations. The systems studied were those between nitromethane, chloroform, acetonitrile (solutes) and various methyl-substituted benzenes. In the n.m.r. work the dependence of the solute chemical shift upon the compositions of the solutions was determined. From this the equilibrium quotients (K) for the formation of each complex and the shift induced in the solute proton by the aromatic in the complex were evaluated. The thermodynamic parameters for the interactions were obtained from the determination of K at several temperatures. The stoichiometries of the complexes obtained from cryoscopic studies were found to agree with those deduced from spectroscopic investigations. For most systems it is suggested that only one type of complex, of 1:1 stoichiometry, predominates, except for the acetonitrile-benzene system, where a 1:2 complex is formed. Two sets of dielectric studies were conducted, the first to show that the nature of the interaction is dipole-induced dipole and the second to calculate K. The equilibrium quotients obtained from spectroscopic and dielectric studies are compared. Time-averaged geometries of the complexes are proposed. The orientation of solute, with respect to the aromatic for the 1:1 complexes, appears to be the one in which the solute lies symmetrically about the aromatic six-fold axis, whereas for the 1:2 complex a sandwich structure is proposed. 
It is suggested that the complexes are formed through a dipole-induced dipole interaction and steric factors play some part in the complex formation.
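The extraction of K from shift data can be sketched with a standard Benesi-Hildebrand-type linearisation for a 1:1 complex; this is a textbook treatment with synthetic numbers, not necessarily the exact analysis used in the thesis:

```python
# Sketch: equilibrium quotient K for a 1:1 solute-aromatic complex from
# the induced chemical shift, via the Benesi-Hildebrand linearisation
#   1/dd_obs = 1/dd_c + 1/(dd_c * K) * 1/[Ar].
# The shift data below are synthetic illustration values.
import numpy as np

K_true, d_c = 0.5, 1.2                            # assumed K (L/mol), shift (ppm)
Ar = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # aromatic conc. (mol/L)
d_obs = d_c * K_true * Ar / (1 + K_true * Ar)     # observed induced shift

slope, intercept = np.polyfit(1 / Ar, 1 / d_obs, 1)
d_c_fit = 1 / intercept                           # shift in the pure complex
K_fit = intercept / slope                         # equilibrium quotient

print(f"K = {K_fit:.3f} L/mol, delta_c = {d_c_fit:.3f} ppm")
```

Repeating the fit at several temperatures then gives the thermodynamic parameters via a van't Hoff plot of ln K against 1/T.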
Abstract:
Many planning and control tools, especially network analysis, have been developed in the last four decades. The majority of them were created in military organizations to solve the problem of planning and controlling research and development projects. The original version of the network model (i.e. C.P.M./PERT) was transplanted to the construction industry without consideration of the special nature and environment of construction projects. It suited the purpose of setting up targets and defining objectives, but it failed to satisfy the requirement of detailed planning and control at the site level. Several analytical and heuristic rule-based methods were designed and combined with the structure of C.P.M. to eliminate its deficiencies. None of them provides a complete solution to the problem of resource, time and cost control. VERT was designed to deal with new ventures. It is suitable for project evaluation at the development stage. CYCLONE, on the other hand, is concerned with the design and micro-analysis of the production process. This work introduces an extensive critical review of the available planning techniques and addresses the problem of planning for site operation and control. Based on the outline of the nature of site control, this research developed a simulation-based network model which combines part of the logic of both VERT and CYCLONE. Several new nodes were designed to model the availability and flow of resources, the overhead and operating cost, and special nodes for evaluating time and cost. A large software package was written to handle the input, the simulation process and the output of the model. This package is designed to be used on any microcomputer using the MS-DOS operating system. Data from real life projects were used to demonstrate the capability of the technique. 
Finally, a set of conclusions is drawn regarding the features and limitations of the proposed model, and recommendations for future work are outlined at the end of this thesis.
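The probabilistic evaluation that simulation-based network models add over deterministic C.P.M. can be sketched with a tiny Monte Carlo run over an activity network; the network, durations and run count are invented for illustration:

```python
# Sketch: Monte Carlo simulation of a small activity network with
# stochastic (triangular) durations, illustrating the probabilistic
# evaluation a VERT/CYCLONE-style model performs.
import random

random.seed(4)
# Activity -> (predecessors, (low, high, mode) duration in days),
# listed in topological order.
NET = {
    "A": ([], (2, 6, 4)),
    "B": (["A"], (3, 9, 5)),
    "C": (["A"], (1, 4, 2)),
    "D": (["B", "C"], (2, 5, 3)),
}

def project_duration():
    finish = {}
    for act, (preds, dur) in NET.items():
        start = max((finish[p] for p in preds), default=0.0)
        finish[act] = start + random.triangular(*dur)
    return finish["D"]

runs = [project_duration() for _ in range(5000)]
mean = sum(runs) / len(runs)
p90 = sorted(runs)[int(0.9 * len(runs))]
print(f"mean = {mean:.1f} d, 90th percentile = {p90:.1f} d")
```

The spread between the mean and the 90th percentile is the information a single deterministic critical-path pass cannot provide.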
Abstract:
Off-highway motive plant equipment is costly in capital outlay and maintenance. To reduce these overheads and increase site safety and workrate, a technique of assessing and limiting the velocity of such equipment is required. Due to the extreme environmental conditions met on such sites, conventional velocity measurement techniques are inappropriate. Ogden Electronics Limited were formed specifically to manufacture a motive plant safety system incorporating a speed sensor and sanction unit; to date, the only such commercial unit available. However, problems plague the reliability, accuracy and mass production of this unit. This project assesses the company's existing product and, in conjunction with an appreciation of the company history and structure, concludes that this unit is unsuited to its intended application. Means of improving the measurement accuracy and longevity of this unit, commensurate with the company's limited resources and experience, are proposed, both for immediate retrofit and for longer-term use. This information is presented in the form of a number of internal reports for the company. The off-highway environment is examined, and in conjunction with an evaluation of means of obtaining a returned signal, comparisons of processing techniques, and on-site gathering of previously unavailable data, preliminary designs for an alternative product are drafted. Theoretical aspects are covered by a literature review of ground-pointing radar, vehicular radar, and velocity measuring systems. This review establishes and collates the body of knowledge in areas previously considered unrelated. Based upon this work, a new design is proposed which is suitable for incorporation into the existing company product range. Following production engineering of the design, five units were constructed, tested and evaluated on-site. 
After extended field trials, this design has shown itself to possess greater accuracy, reliability and versatility than the existing sensor, at a lower unit cost.
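The ground-pointing radar approach reviewed above rests on the Doppler relation between the returned-signal frequency shift and ground speed. A minimal sketch follows; the transmit frequency and beam geometry are assumed illustrative values, not the sensor's actual parameters:

```python
# Sketch: vehicle speed from the Doppler shift of a ground-pointing
# radar beam inclined at angle theta to the direction of travel.
# Frequency and geometry are assumptions for illustration.
import math

C = 3.0e8                    # speed of light (m/s)
F0 = 10.525e9                # assumed transmit frequency (Hz)
THETA = math.radians(45)     # assumed beam depression angle

def speed_from_doppler(f_d):
    # v = f_d * c / (2 * f0 * cos(theta))
    return f_d * C / (2 * F0 * math.cos(THETA))

# A 10 m/s ground speed at this geometry produces roughly a 496 Hz shift.
f_d = 2 * F0 * 10.0 * math.cos(THETA) / C
print(round(speed_from_doppler(f_d), 2))  # prints "10.0"
```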
Abstract:
Analysis of the use of ICT in the aerospace industry has prompted the detailed investigation of an inventory-planning problem. There is a special class of inventory, consisting of expensive repairable spares for use in support of aircraft operations. These items, called rotables, are not well served by conventional theory and systems for inventory management. The context of the problem, the aircraft maintenance industry sector, is described in order to convey some of its special characteristics in the context of operations management. A literature review is carried out to seek existing theory that can be applied to rotable inventory and to identify a gap to which newly developed theory could contribute. Current techniques for rotable planning are identified in industry and the literature; these methods are modelled and tested using inventory and operational data obtained in the field. In the expectation that current practice leaves much scope for improvement, several new models are proposed. These are developed and tested on the field data for comparison with current practice. The new models are revised following testing to give improved versions. The best model developed and tested here comprises a linear programming optimisation, which finds an optimal level of inventory for multiple test cases, reflecting changing operating conditions. The new model offers an inventory plan that is up to 40% less expensive than that determined by current practice, while maintaining required performance.
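The flavour of the rotable stocking decision can be conveyed with a much simplified stand-in for the thesis's linear programming model: choose the smallest stock level that keeps the probability of a spare being unavailable below a target, assuming Poisson demand during the repair turnaround. The demand rate and service targets are invented numbers:

```python
# Sketch: minimal rotable stock level under Poisson demand during the
# repair turnaround. A simplified stand-in for the thesis's LP model;
# lam and the service targets are illustrative assumptions.
import math

def poisson_cdf(k, lam):
    # P(demand <= k) for Poisson(lam).
    return sum(math.exp(-lam) * lam**i / math.factorial(i)
               for i in range(k + 1))

def min_stock(lam, service_level):
    # Smallest s with P(demand in turnaround <= s) >= service_level.
    s = 0
    while poisson_cdf(s, lam) < service_level:
        s += 1
    return s

lam = 3.2                     # expected units in repair at any time (assumed)
for target in (0.80, 0.95, 0.99):
    print(target, min_stock(lam, target))
```

An LP formulation generalises this by trading holding cost against shortage penalties across multiple items and operating conditions simultaneously, which is where the reported 40% saving comes from.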
Abstract:
Procedural knowledge is the knowledge required to perform certain tasks. It forms an important part of expertise, and is crucial for learning new tasks. This paper summarises existing work on procedural knowledge acquisition, and identifies two major challenges that remain to be solved in this field, namely automating the acquisition process to tackle the bottleneck in the formalization of procedural knowledge, and enabling machine understanding and manipulation of procedural knowledge. It is believed that recent advances in information extraction techniques can be applied to compose a comprehensive solution to address these challenges. We identify specific tasks required to achieve the goal, and present detailed analyses of new research challenges and opportunities. It is expected that these analyses will interest researchers of various knowledge management tasks, particularly knowledge acquisition and capture.
Abstract:
We consider the problem of stable determination of a harmonic function from knowledge of the solution and its normal derivative on a part of the boundary of the (bounded) solution domain. The alternating method is a procedure to generate an approximation to the harmonic function from such Cauchy data and we investigate a numerical implementation of this procedure based on Fredholm integral equations and Nyström discretization schemes, which makes it possible to perform a large number of iterations (millions) with minor computational cost (seconds) and high accuracy. Moreover, the original problem is rewritten as a fixed point equation on the boundary, and various other direct regularization techniques are discussed to solve that equation. We also discuss how knowledge of the smoothness of the data can be used to further improve the accuracy. Numerical examples are presented showing that accurate approximations of both the solution and its normal derivative can be obtained with much less computational time than in previous works.
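One common formulation of the alternating procedure is sketched below (the standard Kozlov-Maz'ya scheme, stated here as background rather than quoted from the paper); Γ₀ denotes the accessible boundary part carrying the Cauchy data (f, g) and Γ₁ the remainder:

```latex
% Step 1: given a Dirichlet guess v_{k-1} on \Gamma_1, solve the mixed
% problem
\Delta u_k = 0 \ \text{in } \Omega, \qquad
\partial_\nu u_k = g \ \text{on } \Gamma_0, \qquad
u_k = v_{k-1} \ \text{on } \Gamma_1 .
% Step 2: update using the Neumann trace of u_k on \Gamma_1:
\Delta w_k = 0 \ \text{in } \Omega, \qquad
w_k = f \ \text{on } \Gamma_0, \qquad
\partial_\nu w_k = \partial_\nu u_k \ \text{on } \Gamma_1 ,
% then set v_k = w_k|_{\Gamma_1} and iterate.
```

Writing the map v_{k-1} → v_k as an operator on Γ₁ gives the fixed point equation on the boundary referred to above, to which the direct regularization techniques are applied.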