Abstract:
A quasi-optical de-embedding technique for characterizing waveguides is demonstrated using wideband time-resolved terahertz spectroscopy. A transfer function representation is adopted to describe the signal at the input and output ports of the waveguides. The time-domain responses were discretised, and the waveguide transfer function was obtained through a parametric approach in the z-domain after describing the system with an ARX as well as with a state-space model. Prior to the identification procedure, filtering was performed in the wavelet domain to minimize signal distortion and the noise propagated into the ARX and subspace models. The model identification procedure requires isolation of the phase delay in the structure, so the time-domain signatures must first be aligned with respect to each other before they are compared. An initial estimate of the number of propagating modes was provided by comparing the measured phase delay in the structure with theoretical calculations that take into account the physical dimensions of the waveguide. Models derived from measurements of THz transients in a precision WR-8 waveguide adjustable short are presented.
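As a point of reference, the ARX part of such an identification reduces to a linear least-squares problem once the aligned input and output transients are discretised. The sketch below is a minimal illustration of that step, assuming NumPy and hypothetical arrays u and y for the port signals; the function name and model orders are placeholders, not the authors' code.

```python
import numpy as np

def fit_arx(u, y, na, nb, nk=0):
    """Least-squares fit of an ARX(na, nb) model:
    y[t] = -a1*y[t-1] - ... - a_na*y[t-na]
           + b1*u[t-nk] + ... + b_nb*u[t-nk-nb+1] + e[t]."""
    u, y = np.asarray(u), np.asarray(y)
    start = max(na, nb + nk - 1)
    rows = []
    for t in range(start, len(y)):
        past_y = [-y[t - i] for i in range(1, na + 1)]   # autoregressive part
        past_u = [u[t - nk - j] for j in range(nb)]      # exogenous part
        rows.append(past_y + past_u)
    Phi = np.asarray(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)
    return theta[:na], theta[na:]   # a-coefficients, b-coefficients
```

An estimate of the discrete transfer function then follows as H(z) = B(z)/A(z) built from the returned coefficient vectors.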
Abstract:
A set of standards is proposed for university teaching. Embedding these within the Higher Education Academy UK Professional Standards Framework (UKPSF) would allow a more robust assessment of whether a university teacher has met a minimum acceptable threshold.
Abstract:
The Asian summer monsoon is a high-dimensional and highly nonlinear phenomenon involving considerable moisture transport from the ocean towards land, and is critical for the whole region. We have used daily ECMWF reanalysis (ERA-40) sea-level pressure (SLP) anomalies with respect to the seasonal cycle, over the region 50-145°E, 20°S-35°N, to study the nonlinearity of the Asian monsoon using Isomap. We have focused on the two-dimensional embedding of the SLP anomalies for ease of interpretation. Unlike the unimodality obtained from tests performed in empirical orthogonal function space, the probability density function within the two-dimensional Isomap space turns out to be bimodal. A clustering procedure applied to the SLP data, however, reveals support for three clusters, which are identified using a three-component bivariate Gaussian mixture model. The modes are found to resemble the active and break phases of the monsoon over South Asia, in addition to a third phase showing active conditions over the Western North Pacific. Using the low-level wind field anomalies, the active phase over South Asia is found to be characterised by a strengthening and an eastward extension of the Somali jet, whereas during the break phase the Somali jet weakens near southern India and the monsoon trough in northern India also weakens. Interpretation is aided by the APHRODITE gridded land precipitation product for monsoon Asia. The effect of the large-scale seasonal mean monsoon and lower boundary forcing, in the form of ENSO, is also investigated and discussed. The outcome is that ENSO is shown to perturb the intraseasonal regimes, in agreement with conceptual ideas.
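A minimal sketch of this embed-then-cluster pipeline, assuming scikit-learn and a hypothetical array slp of daily SLP anomaly maps (days × grid points); the neighbourhood size, random seed and file name are illustrative, not the settings used in the study.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

slp = np.load("slp_anomalies.npy")   # hypothetical (n_days, n_gridpoints) array

# Two-dimensional nonlinear embedding of the daily anomaly fields.
coords = Isomap(n_neighbors=20, n_components=2).fit_transform(slp)

# Three-component bivariate Gaussian mixture in the Isomap plane.
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(coords)
labels = gmm.predict(coords)         # cluster (regime) label per day
```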
Abstract:
The UK government is mandating the use of building information modelling (BIM) in large public projects by 2016. As a result, engineering firms are faced with challenges related to embedding new technologies and associated working practices for the digital delivery of major infrastructure projects. Diffusion of innovations theory is used to investigate how digital innovations diffuse across complex firms. A contextualist approach is employed through an in-depth case study of a large, international engineering project-based firm. The analysis of the empirical data, which was collected over a four-year period of close interaction with the firm, reveals parallel paths of diffusion occurring across the firm, where both the innovation and the firm context were continually changing. The diffusion process is traced over three phases: centralization of technology management, standardization of digital working practices, and globalization of digital resources. The findings describe the diffusion of a digital innovation as multiple and partial within a complex social system during times of change and organizational uncertainty, thereby contributing to diffusion of innovations studies in construction by showing a range of activities and dynamics of a non-linear diffusion process.
Abstract:
Objective: this study aimed to develop a nondecalcified bone sample processing technique enabling immunohistochemical labeling of nuclear factor kappa B (NF-kB) proteins, using Technovit 7200 VCR(R), in adult male Wistar rats. Method: a 1.8 mm diameter defect was created 0.5 mm from the proximal femoral joint by means of a round bur. Experimental groups were divided according to the fixative solution used prior to histologic processing: Group 1, 70% ethanol; Group 2, 10% buffered formalin; and Group 3, glycerol diluted in 70% ethanol at a 70/30 ratio plus 10% buffered formalin. The post-surgical periods ranged from 1 to 24 hours. Control groups included a nonsurgical procedure group (NSPG) and a surgical procedure group in which bone exposure was performed without drilling (SPBE). Prostate carcinoma served as the positive control (PC), and samples subjected to an incomplete immunohistochemistry protocol served as the negative control (NC). Following euthanasia, all samples were kept at 4 °C for 7 days and were dehydrated in a graded ethanol series at -20 °C. The polymer embedding procedure was performed at ethanol/polymer ratios of 70/30, 50/50 and 30/70, followed by two steps of 100% polymer, for 72 hours at -20 °C. Polymerization followed the manufacturer's recommendations. The samples were ground and polished to 10-15 μm thickness and deacrylated. The sections were rehydrated and incubated with the primary polyclonal anti-NF-kB antibody at a 1:75 dilution for 12 hours at room temperature. Results: microscopy showed that Group 2 presented a positive reaction to NF-kB, the NSPG and SPBE groups presented diffuse reactions, and the NC group showed no reaction. Conclusion: the results obtained support the feasibility of the developed immunohistochemistry technique.
Abstract:
The NMR spin coupling parameters, (1)J(N,H) and (2)J(H,H), and the chemical shielding, sigma((15)N), of liquid ammonia are studied using a combined and sequential QM/MM methodology. Monte Carlo simulations are performed to generate statistically uncorrelated configurations that are submitted to density functional theory calculations. Two different Lennard-Jones potentials are used in the liquid simulations. Electronic polarization is included in these two potentials via an iterative procedure, with and without geometry relaxation, and its influence on the calculated properties is analyzed. B3LYP/aug-cc-pVTZ-J calculations give (1)J(N,H) constants in the interval of -67.8 to -63.9 Hz, depending on the theoretical model used. These can be compared with the experimental result of -61.6 Hz. For the (2)J(H,H) coupling the theoretical results vary between -10.6 and -13.01 Hz. The indirect experimental result, derived from partially deuterated liquid, is -11.1 Hz. Inclusion of explicit hydrogen-bonded molecules gives a small but important contribution. The vapor-to-liquid shifts are also considered. This shift is calculated to be negligible for (1)J(N,H), in agreement with experiment. This is rationalized as a cancellation of the geometry relaxation and pure solvent effects. For the chemical shielding, sigma((15)N), calculations at the B3LYP/aug-pcS-3 level show that the vapor-to-liquid chemical shift requires the explicit use of solvent molecules. Considering only one ammonia molecule in an electrostatic embedding gives the wrong sign for the chemical shift, which is corrected only with the use of explicit additional molecules. The best calculated value for the vapor-to-liquid chemical shift Delta sigma((15)N) is -25.2 ppm, in good agreement with the experimental value of -22.6 ppm.
Abstract:
We propose an approach to the quantum-mechanical description of relativistic orientable objects. It generalizes Wigner's ideas concerning the treatment of nonrelativistic orientable objects (in particular, a nonrelativistic rotator) with the help of two reference frames (space-fixed and body-fixed). A technical realization of this generalization (for instance, in 3+1 dimensions) amounts to introducing wave functions that depend on elements of the Poincaré group G. A complete set of transformations that test the symmetries of an orientable object and of the embedding space belongs to the product group G x G. All such transformations can be studied by considering a generalized regular representation of G in the space of scalar functions on the group, f(x,z), that depend on the Minkowski space points x ∈ G/Spin(3,1) as well as on the orientation variables given by the elements z of a matrix Z ∈ Spin(3,1). In particular, the field f(x,z) is a generating function of the usual multi-component spin-tensor fields. In the theory under consideration there are four different types of spinors, and an orientable object is characterized by ten quantum numbers. We study the corresponding relativistic wave equations and their symmetry properties.
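For orientation, a generalized regular representation acts on scalar functions on the group by two-sided translations; the display below is the standard textbook form of that action, not a formula quoted from the paper, and its conventions may differ in detail.

```latex
% Two-sided (generalized) regular representation of G x G acting on
% scalar functions f on G; a standard convention, possibly differing
% from the paper's in details.
\[
  \bigl(T(g_1,g_2)\,f\bigr)(g) \;=\; f\bigl(g_1^{-1}\, g\, g_2\bigr),
  \qquad (g_1,g_2)\in G\times G,\quad g\in G.
\]
```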
Abstract:
We consider conditions which allow the embedding of linear hypergraphs of fixed size. In particular, we prove that any k-uniform hypergraph H of positive uniform density contains all linear k-uniform hypergraphs of a given size. More precisely, we show that for all integers l >= k >= 2 and every d > 0 there exists Q > 0 for which the following holds: if H is a sufficiently large k-uniform hypergraph with the property that the density of H induced on every vertex subset of size Qn is at least d, then H contains every linear k-uniform hypergraph F with l vertices. The main ingredient in the proof of this result is a counting lemma for linear hypergraphs, which establishes that the straightforward extension of graph epsilon-regularity to hypergraphs suffices for counting linear hypergraphs. We also consider some related problems.
Abstract:
In 1983, Chvátal, Trotter and the two senior authors proved that for any Delta there exists a constant B such that, for any n, any 2-colouring of the edges of the complete graph K_N with N >= Bn vertices yields a monochromatic copy of any graph H that has n vertices and maximum degree Delta. We prove that the complete graph may be replaced by a sparser graph G that has N vertices and O(N^(2-1/Delta) log^(1/Delta) N) edges, with N = [B'n] for some constant B' that depends only on Delta. Consequently, the so-called size-Ramsey number of any H with n vertices and maximum degree Delta is O(n^(2-1/Delta) log^(1/Delta) n). Our approach is based on random graphs; in fact, we show that the classical Erdős-Rényi random graph with the numerical parameters above satisfies a stronger partition property with high probability, namely, that any 2-colouring of its edges contains a monochromatic universal graph for the class of graphs on n vertices with maximum degree Delta. The main tool in our proof is the regularity method, adapted to a suitable sparse setting. The novel ingredient developed here is an embedding strategy that allows one to embed bounded-degree graphs of linear order in certain pseudorandom graphs. Crucial to our proof is the fact that regularity is typically inherited at a scale that is much finer than the scale at which it is assumed.
Abstract:
This work aims at combining the postulates of Chaos theory and the classification and predictive capability of Artificial Neural Networks in the field of financial time series prediction. Chaos theory provides valuable qualitative and quantitative tools for deciding on the predictability of a chaotic system. Quantitative measurements based on Chaos theory are used to decide a priori whether a time series, or a portion of a time series, is predictable, while Chaos-theory-based qualitative tools are used to provide further observations and analysis of predictability in cases where the measurements provide negative answers. Phase space reconstruction is achieved by time-delay embedding, resulting in multiple embedded vectors. The cognitive approach suggested is inspired by the ability of some chartists to predict the direction of an index by looking at the price time series. Thus, in this work, the calculation of the embedding dimension and the separation in Takens' embedding theorem for phase space reconstruction is not limited to False Nearest Neighbor, Differential Entropy or any other specific method; rather, this work is interested in all embedding dimensions and separations, regarded as the different ways in which different chartists look at a time series, based on their expectations. Prior to prediction, the embedded vectors of the phase space are classified with Fuzzy-ART; then, for each class, a backpropagation Neural Network is trained to predict the last element of each vector, with all previous elements of the vector used as features.
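A minimal sketch of the time-delay embedding step, assuming NumPy; the function name, the toy series and the (dim, tau) values are illustrative, not the paper's choices.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Return the phase-space trajectory matrix: each row is
    (x[t], x[t+tau], ..., x[t+(dim-1)*tau])."""
    x = np.asarray(x)
    n = len(x) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# In the paper's scheme, the last column would be the prediction target
# and the remaining columns the features, for every (dim, tau) pair.
vectors = delay_embed(np.random.randn(500), dim=5, tau=2)
features, target = vectors[:, :-1], vectors[:, -1]
```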
Abstract:
The rapid development of data transfer over the internet has made it easier to send data quickly and accurately to a destination. There are many transmission media for transferring data, such as e-mail; at the same time, valuable information may be modified and misused through hacking. So, to transfer data securely to the destination without any modification, there are approaches such as cryptography and steganography. This paper deals with image steganography, along with the related security issues and a general overview of cryptography, steganography and digital watermarking approaches. The problem of copyright violation of multimedia data has increased with the enormous growth of computer networks, which provide fast and error-free transmission of unauthorized, and possibly manipulated, copies of multimedia information. To be effective for copyright protection, a digital watermark must be robust: difficult to remove from the object in which it is embedded despite a variety of possible attacks. To send the message safely and securely, we use watermarking: an invisible watermark embeds the message using the LSB (Least Significant Bit) steganographic technique. The standard LSB technique embeds the message in every pixel; our contribution in the proposed watermarking scheme is to embed the message only along the image edges. Even if an attacker knows that the system uses the LSB technique, they cannot recover the correct message. To make the system robust and secure, we add a cryptographic algorithm, the Vigenere square, so the message is transmitted as ciphertext, which is an added advantage of the proposed system. The standard Vigenere square algorithm works with either lower case or upper case letters; the proposed algorithm extends the Vigenere square with numbers as well, so the crypto key can combine characters and numbers. With these modifications to the existing algorithm, and the combination of cryptography and steganography, we develop a secure and strong watermarking method. The performance of this watermarking scheme has been analyzed by evaluating the robustness of the algorithm with PSNR (Peak Signal to Noise Ratio) and MSE (Mean Square Error) against image quality for large amounts of data. The proposed scheme achieves a high PSNR of 89 dB with a small MSE of 0.0017. The proposed watermarking system therefore appears secure and robust for hiding information in digital media, since it combines the properties of both steganography and cryptography.
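A minimal sketch of the baseline every-pixel LSB embedding and the PSNR metric, assuming an 8-bit grayscale NumPy image; it deliberately omits the paper's edge-only embedding and extended Vigenere cipher, and all names and values are illustrative.

```python
import numpy as np

def embed_lsb(image, bits):
    """Overwrite the least significant bit of the first len(bits)
    pixels with the message bits (0 or 1); other pixels are unchanged."""
    flat = image.flatten()                      # copy of the cover image
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | np.asarray(bits, dtype=flat.dtype)
    return flat.reshape(image.shape)

def psnr(original, stego):
    """Peak Signal to Noise Ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
message = np.random.randint(0, 2, 100)
stego = embed_lsb(cover, message)
print(psnr(cover, stego))   # distortion from flipping at most 100 LSBs is tiny
```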
Abstract:
The development of robots has proven to be a very complex interdisciplinary research field. The predominant development procedure in recent decades has been based on the assumption that each robot is a fully personalized project, with hardware and software technologies embedded directly in robot parts with no level of abstraction. Although this methodology has brought countless benefits to robotics research, it has also imposed major drawbacks: (i) the difficulty of reusing hardware and software parts in new robots or new versions; (ii) the difficulty of comparing the performance of different robot parts; and (iii) the difficulty of adapting development needs, at the hardware and software levels, to the expertise of local groups. Large advances might be achieved, for example, if the physical parts of a robot could be reused in a different robot constructed with other technologies by another researcher or group. This paper proposes a framework for robots, TORP (The Open Robot Project), that aims to put forward a standardization of all dimensions (electrical, mechanical and computational) of a shared robot development model. This architecture is based on the dissociation between the robot and its parts, and between the robot parts and their technologies. In this paper, the first specification for a TORP family and the first humanoid robot constructed following the TORP specification set are presented, as well as the advances proposed for their improvement.
Abstract:
This study presents the implementation and embedding of an Artificial Neural Network (ANN) in hardware, on a programmable device such as a field programmable gate array (FPGA). This work explored different implementations, described in VHDL, of multilayer perceptron ANNs. Despite the parallelism inherent to ANNs, software implementations are at a disadvantage because of the sequential nature of Von Neumann architectures. As an alternative, a hardware implementation makes it possible to exploit all the parallelism implicit in this model. Currently, FPGAs are increasingly used as a platform for implementing neural networks in hardware, exploiting their high processing power, low cost, ease of programming and ability to reconfigure the circuit, allowing the network to adapt to different applications. Given this context, the aim is to develop arrays of neural networks in hardware with a flexible architecture, in which it is possible to add or remove neurons and, mainly, to modify the network topology, in order to enable a modular fixed-point-arithmetic network on an FPGA. Five syntheses of VHDL descriptions were produced: two for the neuron, with one or two inputs, and three for different ANN architectures. The descriptions of the architectures used are very modular, easily allowing the number of neurons to be increased or decreased. As a result, several complete neural networks were implemented on an FPGA, in fixed-point arithmetic, with high-capacity parallel processing.
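A minimal sketch of the fixed-point multiply-accumulate that such a hardware neuron performs, written in Python for readability; the Q4.12 format, the step activation and all names are illustrative assumptions, not the paper's VHDL design.

```python
# Q4.12 fixed point: 16-bit values with 12 fractional bits (an assumed format).
FRAC_BITS = 12
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    return int(round(x * SCALE))

def fixed_mul(a, b):
    # Multiply two Q4.12 numbers; shift right to renormalize the product.
    return (a * b) >> FRAC_BITS

def neuron(inputs, weights, bias):
    """Multiply-accumulate followed by a hard-limit activation, mirroring
    what a fixed-point MAC unit plus comparator would do in hardware."""
    acc = to_fixed(bias)
    for x, w in zip(inputs, weights):
        acc += fixed_mul(to_fixed(x), to_fixed(w))
    return 1 if acc > 0 else 0          # step activation

print(neuron([0.5, -0.25], [0.8, 0.4], bias=0.1))   # -> 1
```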
Abstract:
We explore here the issue of duality versus spectrum equivalence in dual theories generated through the master action approach. Specifically, we examine a generalized self-dual (GSD) model in which a Maxwell term is added to the self-dual model. A gauge embedding procedure applied to the GSD model leads to a Maxwell-Chern-Simons (MCS) theory with higher derivatives. We show that the latter contains a ghost mode, contrary to the original GSD model. By tracing the origin of the ghost we are able to suggest a new master action which interpolates between the local GSD model and a nonlocal MCS model. These models share the same spectrum and are ghost free. Furthermore, there is a dual map between the two theories at the classical level which survives quantum correlation functions up to contact terms. The remarks made here may be relevant for other applications of the master action approach.
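For concreteness, a schematic form of the model involved is shown below: the standard 2+1-dimensional self-dual Lagrangian with a Maxwell term appended. Signs, the Maxwell coefficient c and the overall normalization are convention-dependent assumptions, not necessarily those of the paper.

```latex
% Self-dual model plus Maxwell term (the GSD model) in 2+1 dimensions;
% conventions and normalizations vary between references.
\[
  \mathcal{L}_{\mathrm{GSD}}
  = \frac{m^{2}}{2}\, f_{\mu} f^{\mu}
  - \frac{m}{2}\,\epsilon^{\mu\nu\rho} f_{\mu}\,\partial_{\nu} f_{\rho}
  - \frac{c}{4}\, f_{\mu\nu} f^{\mu\nu},
  \qquad
  f_{\mu\nu} = \partial_{\mu} f_{\nu} - \partial_{\nu} f_{\mu}.
\]
```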