927 results for implementation method
Abstract:
Microwave sources used in present-day applications are either multiplied sources derived from basic quartz crystals, or frequency synthesizers. The frequency multiplication method increases FM noise power considerably and has very low efficiency, in addition to being complex and expensive. This complexity and cost demand a simple, compact and tunable microwave source, and a tunable dielectric resonator oscillator (DRO) is an ideal choice for such applications. In this paper, the simulation, design and realization of a tunable DRO with a centre frequency of 6250 MHz is presented. Simulation has been carried out using HP EEsof CAD software. Mechanical and electronic tuning features are provided. The DRO operates over a frequency range of 6235 MHz to 6375 MHz, with an output power of +5.33 dBm at the centre frequency. The performance of the DRO is as per design with respect to phase noise, harmonic levels and tunability, and hence it can conveniently be used for the intended applications.
Abstract:
Corporate Social Responsibility (CSR) has become an increasingly important topic in the forest industries, and in other global companies, in recent years. Globalisation, faster information delivery and the demand for sustainable development have set new challenges for global companies in their business operations. The importance of stakeholder relations, and the pressure to become more transparent, have also increased in the forest industries. Three dimensions of corporate responsibility (economic, environmental and social) are often included in the concept of CSR, and global companies mostly claim that these dimensions are equally important. This study analyses CSR in the forest industry, focusing on the reporting and implementation of social responsibility in three international companies. The case-companies are Stora Enso, SCA and Sappi; they have different geographical bases and product portfolios, and therefore present interesting differences in forest industry strategy and CSR. The Global Reporting Initiative (GRI) has created the best known and most widely used framework for CSR reporting. The GRI Guidelines have made CSR reporting a uniform function that can be compared between companies and across sectors, and have made it possible to record and control CSR data within companies. In recent years the use of the GRI Guidelines has increased substantially. Typically, CSR reporting on economic and environmental responsibility has been systematic in global companies, often driven by legislation and other regulations. Social responsibility, however, has been less regulated and more difficult to compare, and has therefore often received less focus in the CSR reporting of global companies. The implementation and use of the GRI Guidelines have also increased dialogue on social responsibility issues and stakeholder management in global companies. This study analyses the use of the GRI framework in the CSR reporting of forest industry companies.
This is a qualitative study, and the disclosed data are empirically analysed using content analysis. Content analysis was selected as the method for this study because it makes it possible to use different sources of information. The data consist of the existing academic literature on CSR, the sustainability reports of the case-companies during 2005-2009, and semi-structured interviews with company representatives. Different sources provide the possibility to look at a specific subject from more than one viewpoint. The results show that all the case-companies have relatively common themes in their CSR disclosure, and the differences arise mainly from their product portfolios and geographic bases. Social impacts on local communities in the companies' CSR were dominated mainly by issues concerning creating wealth for society and impacting communities through the creation of work. The comparability of CSR reporting, and especially of the social indicators, increased significantly from 2007 onwards in all case-companies. Even though the companies claim that the three dimensions of CSR (economic, environmental and social) are equally important, economic issues and profit improvement still seem to drive most of the operations in the global companies. Many issues that are covered by laws and regulations are still essentially presented as social responsibility in CSR. Unwelcome issues, such as closing operations, are often covered only briefly and without adequate explanation. Making social responsibility equally important in CSR would demand more emphasis from all the case-companies, especially on the detail and extensiveness of the social responsibility content.
Abstract:
An efficient algorithm within the finite deformation framework is developed for the finite element implementation of a recently proposed isotropic, Mohr-Coulomb type material model, which captures the elastic-viscoplastic, pressure-sensitive and plastically dilatant response of bulk metallic glasses. The constitutive equations are first reformulated and implemented using an implicit numerical integration procedure based on the backward Euler method. The resulting system of nonlinear algebraic equations is solved by the Newton-Raphson procedure. This is achieved by developing a principal-space return mapping technique for the present model, which involves simultaneous shearing and dilatation on multiple potential slip systems. The complete stress update algorithm is presented and the expressions for the viscoplastic consistent tangent moduli are derived. The stress update scheme and the viscoplastic consistent tangent are implemented in the commercial finite element code ABAQUS/Standard. The accuracy and performance of the numerical implementation are verified by considering several benchmark examples, which include a simulation of multiple shear bands in a 3D prismatic bar under uniaxial compression.
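The ingredients summarized above (a backward-Euler residual solved by Newton-Raphson, plus a consistent tangent) can be sketched in miniature. The following is a minimal 1-D elastic-viscoplastic example with a power-law flow rule; all parameter values are illustrative assumptions, and this is not the paper's Mohr-Coulomb model:

```python
# Minimal 1-D sketch of a backward-Euler stress update solved by
# Newton-Raphson, with a consistent tangent. Hypothetical power-law
# viscoplasticity; parameter values are assumed for illustration.
E = 200e3        # Young's modulus [MPa] (assumed)
sigma0 = 300.0   # reference stress [MPa] (assumed)
gdot0 = 1e-3     # reference viscoplastic strain rate [1/s] (assumed)
m = 5.0          # rate-sensitivity exponent (assumed)
dt = 1e-2        # time step [s]

def stress_update(eps_new, eps_p_old):
    """Backward Euler: find eps_p such that
       r(eps_p) = eps_p - eps_p_old - dt*gdot0*(sigma/sigma0)**m = 0,
       with sigma = E*(eps_new - eps_p), via Newton-Raphson."""
    eps_p = eps_p_old
    for _ in range(50):
        sigma = max(E * (eps_new - eps_p), 0.0)
        r = eps_p - eps_p_old - dt * gdot0 * (sigma / sigma0) ** m
        # d(sigma)/d(eps_p) = -E, hence the + sign in the flow term
        dr = 1.0 + dt * gdot0 * m * (sigma / sigma0) ** (m - 1) * E / sigma0
        step = r / dr
        eps_p -= step
        if abs(step) < 1e-14:
            break
    sigma = E * (eps_new - eps_p)
    # viscoplastic consistent tangent d(sigma)/d(eps) at the converged state
    h = dt * gdot0 * m * (max(sigma, 0.0) / sigma0) ** (m - 1) / sigma0
    return sigma, eps_p, E / (1.0 + E * h)
```

For a strain step whose elastic predictor exceeds the reference stress, the update returns a relaxed stress and a tangent softer than E, as expected of viscoplastic flow.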
Abstract:
Experimental realization of quantum information processing in the field of nuclear magnetic resonance (NMR) has been well established. Implementation of the conditional phase-shift gate has been a significant step, which has led to the realization of important algorithms such as Grover's search algorithm and the quantum Fourier transform. This gate has so far been implemented in NMR using the coupling evolution method. We demonstrate here the implementation of the conditional phase-shift gate using transition-selective pulses. As an application of the gate, we demonstrate Grover's search algorithm and the quantum Fourier transform by simulations and experiments using transition-selective pulses. (C) 2002 Elsevier Science (USA). All rights reserved.
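The action of a conditional phase-shift gate, and its role in Grover's search, can be checked numerically at the level of unitary matrices. A small NumPy sketch for two qubits follows (pure linear algebra, independent of any NMR pulse implementation); note that both the oracle and the diffusion step are built from conditional phase shifts:

```python
import numpy as np

# Conditional phase-shift gate on two qubits: multiplies |11> by exp(i*phi).
def cphase(phi):
    return np.diag([1.0, 1.0, 1.0, np.exp(1j * phi)])

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
H2 = np.kron(H, H)                       # Hadamard on both qubits

# Grover search for the marked state |11>.
oracle = cphase(np.pi)                   # pi phase flip on |11>
flip00 = np.diag([-1.0, 1.0, 1.0, 1.0])  # pi phase flip on |00>
diffusion = H2 @ flip00 @ H2             # inversion about the mean (up to a global phase)

state = H2 @ np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)  # uniform superposition
state = diffusion @ (oracle @ state)     # one Grover iteration
probs = np.abs(state) ** 2               # measurement probabilities
```

For two qubits, a single Grover iteration already yields the marked state with certainty.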
Abstract:
Diffuse optical tomography (DOT) using near-infrared (NIR) light is a promising tool for noninvasive imaging of deep tissue. The technique is capable of quantitative reconstruction of absorption coefficient inhomogeneities in tissue. The motivation for reconstructing the optical property variation, and in particular the absorption coefficient variation, is that it can be used to diagnose different metabolic and disease states of tissue. In DOT, as in any other medical imaging modality, the aim is to produce a reconstruction with good spatial resolution and accuracy from noisy measurements. We study the performance of a phased array system for the detection of optical inhomogeneities in tissue. Light transport through tissue is diffusive in nature and can be modeled using the diffusion equation if the optical parameters of the inhomogeneity are close to the optical properties of the background. The amplitude cancellation method, which uses dual out-of-phase sources (a phased array), can detect and locate small objects in a turbid medium. The inverse problem is solved using model-based iterative image reconstruction. The diffusion equation is solved using the finite element method to provide the forward model for photon transport. The solution of the forward problem is used to compute the Jacobian, and the resulting system of equations is solved using a conjugate gradient search. Simulation studies have been carried out, and the results show that a phased array system can resolve inhomogeneities with sizes of 5 mm when the absorption coefficient of the inhomogeneity is twice that of the background tissue. To validate this result, a prototype dual-source system has been developed. Experiments are carried out by inserting an inhomogeneity of high optical absorption coefficient into an otherwise homogeneous phantom while keeping the scattering coefficient the same.
The high-frequency (100 MHz) modulated, dual out-of-phase laser source light is propagated through the phantom. In a homogeneous object, the interference of these sources creates an amplitude null and a 180° phase shift along a plane between the two sources. A solid resin phantom with inhomogeneities simulating a tumor is used in our experiment. The amplitude and phase are found to be disturbed by the presence of the inhomogeneity in the object. The experimental data (the amplitude and phase measured at the detector) are used for reconstruction. The results show that the method is able to detect multiple inhomogeneities with sizes of 4 mm. The localization error for a 5 mm inhomogeneity is found to be approximately 1 mm.
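The amplitude null and the 180° phase step described above can be reproduced with a toy calculation. The sketch below assumes a damped spherical wave exp(-kr)/r per source in an infinite homogeneous medium, with illustrative parameter values; it is not the paper's phantom geometry, but it exhibits both interference features:

```python
import numpy as np

# Two out-of-phase point sources at x = -d and x = +d. Each contributes a
# damped spherical wave exp(-k*r)/r; the second enters with a minus sign
# (180 deg phase). Parameter values are assumed for illustration.
k = 0.5 + 0.3j   # complex photon-density-wave number [1/mm]
d = 10.0         # source half-separation [mm]

def field(x, y):
    r1 = np.hypot(x + d, y)   # distance to the in-phase source
    r2 = np.hypot(x - d, y)   # distance to the out-of-phase source
    return np.exp(-k * r1) / r1 - np.exp(-k * r2) / r2

y = 15.0
on_plane = field(0.0, y)      # point on the midplane between the sources
phase_jump = np.angle(field(-2.0, y)) - np.angle(field(2.0, y))
```

On the midplane the two contributions cancel exactly (the amplitude null), and the phase changes by 180° from one side to the other; an absorbing inhomogeneity breaks this symmetry, which is what the measurement exploits.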
Abstract:
We address the problem of sampling and reconstruction of two-dimensional (2-D) finite-rate-of-innovation (FRI) signals. We propose a three-channel sampling method that solves the problem efficiently. We consider the sampling of a stream of 2-D Dirac impulses and of a sum of 2-D unit-step functions. We propose a 2-D causal exponential function as the sampling kernel; by causality in 2-D, we mean that the function's support is restricted to the first quadrant. The advantage of using a multichannel sampling method with a causal exponential sampling kernel is that standard annihilating filter or root-finding algorithms are not required. Further, the proposed method has an inexpensive hardware implementation and is numerically stable as the number of Dirac impulses increases.
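One way to see why causal exponential kernels can avoid annihilating filters and root finding is the single-Dirac case, where the unknowns fall out of log-ratios of three channel measurements. The sketch below is a toy illustration of that principle only, not the paper's full multichannel algorithm:

```python
import numpy as np

# Toy single-Dirac illustration (assumed setup, not the paper's scheme):
# a 2-D Dirac a*delta(x-x0, y-y0) on the first quadrant measured against
# causal exponential kernels exp(-sx*x - sy*y) yields
#     m(sx, sy) = a * exp(-sx*x0 - sy*y0).
# Three channels with exponents (s,s), (2s,s), (s,2s) recover (x0, y0, a)
# by log-ratios, with no annihilating filter or root finding.
a, x0, y0 = 2.5, 1.2, 0.7   # unknown amplitude and location (assumed >= 0)
s = 0.8                     # base exponent (design parameter, assumed)

def measure(sx, sy):
    return a * np.exp(-sx * x0 - sy * y0)

m00, m10, m01 = measure(s, s), measure(2 * s, s), measure(s, 2 * s)
x_est = -np.log(m10 / m00) / s              # m10/m00 = exp(-s*x0)
y_est = -np.log(m01 / m00) / s              # m01/m00 = exp(-s*y0)
a_est = m00 * np.exp(s * (x_est + y_est))   # undo the exponential decay
```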
Abstract:
It is essential to accurately estimate the working set size (WSS) of an application for various optimizations, such as partitioning a cache among virtual machines or reducing leakage power by switching off an over-allocated cache. However, state-of-the-art heuristics such as average memory access latency (AMAL) or cache miss ratio (CMR) are poorly correlated with the WSS of an application due to 1) over-sized caches and 2) their dispersed nature. Past studies focus on estimating the WSS of an application executing on a uniprocessor platform. Estimating the same for a chip multiprocessor (CMP) with a large dispersed cache is challenging due to the presence of concurrently executing threads/processes. Hence, we propose a scalable, highly accurate method to estimate the WSS of an application, which we call the "tagged WSS (TWSS)" estimation method. We demonstrate the use of TWSS to switch off over-allocated cache ways in Static and Dynamic NonUniform Cache Architectures (SNUCA, DNUCA) on a tiled CMP. In our implementation of adaptable-way SNUCA and DNUCA caches, the decision to alter associativity is taken by each L2 controller; hence, the approach scales with the number of cores on a CMP. It gives overall (geometric mean) 26% and 19% higher energy-delay product savings compared to the AMAL and CMR heuristics on SNUCA, respectively.
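The intuition behind tag-based WSS estimation (count the distinct cache blocks an application actually touches, rather than inferring the footprint from latency or miss ratio) can be sketched in software. The block size and trace below are illustrative assumptions, not the paper's hardware mechanism:

```python
# Toy sketch of working-set-size estimation by tagging cache blocks:
# the WSS of an address trace is taken as the number of distinct
# cache-block addresses touched, times the block size. This mirrors the
# per-block "accessed tag" intuition behind TWSS, not its hardware design.
BLOCK = 64  # cache block size in bytes (typical value, assumed)

def wss_bytes(addresses):
    """Estimate WSS as (#distinct blocks touched) * block size."""
    return len({addr // BLOCK for addr in addresses}) * BLOCK

# Example trace: streaming over a 4 KiB array twice at 8-byte stride.
# Revisits do not inflate the estimate, unlike a raw access count.
trace = [base for _ in range(2) for base in range(0, 4096, 8)]
```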
Abstract:
In the domain of manual mechanical assembly, expert knowledge is an important means of supporting assembly planning and leads to fewer issues during actual assembly. Knowledge-based systems can be used to provide assembly planners with expert knowledge as advice. However, the acquisition of knowledge remains a difficult task to automate, while manual acquisition is tedious, time-consuming, and requires the engagement of knowledge engineers with specialist knowledge to understand and translate expert knowledge. This paper describes the development, implementation and preliminary evaluation of a method that asks an expert a series of questions, so as to automatically acquire the necessary diagnostic and remedial knowledge as rules for use in a knowledge-based system that helps assembly planners diagnose and resolve issues. The method, called a questioning procedure, organizes its questions around an assembly situation, which it presents to the expert as the context, and adapts its questions based on the answers it receives from the expert. (C) 2014 Elsevier Ltd. All rights reserved.
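A questioning procedure of the general kind described (context-driven questions whose follow-ups depend on the expert's answers, with the answers stored as rules) can be sketched minimally. The loop structure, rule format and example dialogue below are hypothetical, not the paper's procedure:

```python
# Minimal sketch of an adaptive questioning loop (hypothetical structure):
# present an assembly situation as context, ask for an issue and its
# remedy, store the pair as an IF-THEN rule, and repeat until the expert
# reports no further issues.
def questioning_procedure(situation, ask):
    rules = []
    issue = ask(f"In situation '{situation}', what issue can occur?")
    while issue.lower() != "none":
        remedy = ask(f"How should a planner resolve '{issue}'?")
        rules.append({"if": situation, "issue": issue, "then": remedy})
        issue = ask(f"Any other issue in situation '{situation}'? (none to stop)")
    return rules

# Scripted "expert" answers, standing in for an interactive session.
answers = iter(["part slips during insertion", "add a fixture", "none"])
rules = questioning_procedure("vertical peg-in-hole", lambda q: next(answers))
```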
Abstract:
A Field Programmable Gate Array (FPGA) based hardware accelerator for multi-conductor parasitic capacitance extraction using the Method of Moments (MoM) is presented in this paper. Due to the prohibitive cost of solving the dense algebraic system formed by MoM, linear-complexity fast solver algorithms have been developed in the past to expedite the matrix-vector product computation in a Krylov subspace based iterative solver framework. However, as the number of conductors in a system increases, leading to a corresponding increase in the number of right-hand-side (RHS) vectors, the computational cost of multiple matrix-vector products presents a time bottleneck, especially for ill-conditioned system matrices. In this work, an FPGA-based hardware implementation is proposed to parallelize the iterative matrix solution for multiple RHS vectors in a low-rank compression based fast solver scheme. The method is applied to accelerate electrostatic parasitic capacitance extraction of multiple conductors in a Ball Grid Array (BGA) package. Speed-ups of up to 13x for dense matrix-vector products and 12x for QR-compressed matrix-vector products over an equivalent software implementation on an Intel Core i5 processor are achieved using a Virtex-6 XC6VLX240T FPGA on Xilinx's ML605 board.
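The multiple-RHS bottleneck has a simple software analogue: k right-hand sides cost k separate matrix-vector products per solver iteration, but batching them into one matrix-matrix product reuses each row of the system matrix across all RHS vectors, which is the kind of data reuse a hardware pipeline exploits. A NumPy sketch (illustrative sizes, not the BGA extraction problem):

```python
import numpy as np

# k right-hand sides mean k matrix-vector products per iteration; stacking
# the RHS vectors as columns of B turns them into one N x k matrix-matrix
# product, with a single pass over A for all k RHS.
rng = np.random.default_rng(0)
N, k = 256, 8
A = rng.standard_normal((N, N))   # dense MoM-style system matrix (random stand-in)
B = rng.standard_normal((N, k))   # k RHS vectors as columns

one_by_one = np.stack([A @ B[:, j] for j in range(k)], axis=1)
batched = A @ B                   # identical result, far better data reuse
```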
Abstract:
Cognitive Radio (CR) is a promising technology that provides a novel way to overcome the spectrum underutilization caused by fixed spectrum assignment policies. In this paper we report the design and implementation of a soft-real-time CR MAC, consisting of multiple secondary users, in a frequency hopping (FH) primary scenario. This MAC is capable of sensing the spectrum and dynamically allocating the available frequency bands to multiple CR users based on their QoS requirements. As the primary is continuously hopping, a method has also been implemented to detect the hop instant of the primary network. Synchronization usually requires real-time support; however, we have been able to achieve it with a soft-real-time technique, which enables a fully software implementation of the CR MAC layer. We demonstrate the wireless transmission and reception of video over this CR testbed through opportunistic spectrum access. The experiments use the open-source software-defined radio package GNU Radio and a basic radio hardware component, the USRP.
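Hop-instant detection for a frequency-hopping primary can be illustrated with a toy spectral change detector: track the dominant FFT bin over short windows and flag the window where it changes. The signal and all parameters below are illustrative assumptions, not the testbed's algorithm:

```python
import numpy as np

# Toy hop-instant detection: a primary that hops from 500 Hz to 1500 Hz
# mid-signal; the dominant rfft bin per window changes at the hop.
fs, win = 8000, 256
t = np.arange(4096) / fs
sig = np.where(t < t[2048],
               np.sin(2 * np.pi * 500 * t),    # before the hop
               np.sin(2 * np.pi * 1500 * t))   # after the hop

# Dominant frequency bin in each non-overlapping window.
bins = [int(np.argmax(np.abs(np.fft.rfft(sig[i:i + win]))))
        for i in range(0, len(sig) - win + 1, win)]
# A hop is declared where the dominant bin changes between windows.
hop_windows = [w for w in range(1, len(bins)) if bins[w] != bins[w - 1]]
```

With the hop placed at sample 2048 and 256-sample windows, the detector flags window 8, localizing the hop instant to within one window.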
Abstract:
A new automatic algorithm for the assessment of mixed-mode crack growth rate characteristics is presented, based on the concept of an equivalent crack. The residual ligament size approach is introduced to implement this algorithm for identifying the crack tip position on a curved path with respect to the drop in the potential signal. The automatic algorithm, accounting for the curvilinear crack trajectory and employing an electrical potential difference, was calibrated against optical measurements of the growing crack under cyclic mixed-mode loading conditions. The effectiveness of the proposed algorithm is confirmed by fatigue tests performed on ST3 steel compact tension-shear specimens over the full range of mode mixities from pure mode I to pure mode II. (C) 2015 Elsevier Ltd. All rights reserved.
Abstract:
This paper presents a simple hysteretic method to obtain the energy required to operate the gate drive, sensors, and other circuits within non-neutral ac switches intended for use in load-automated buildings. The proposed method features a switch-mode, low part-count, self-powered MOSFET ac switch that achieves efficiency and load current THD figures comparable to those of an externally gate-driven switch built using similar MOSFETs. The fundamental operation of the method is explained in detail, followed by the modifications required for practical implementation. Design rules that allow the method to accommodate a wide range of single-phase loads from 10 VA to 1 kVA are discussed, along with an efficiency enhancement feature based on inherent MOSFET characteristics. The limitations and side effects of the method are also discussed according to their levels of severity. Finally, experimental results obtained using a prototype sensor switch are presented, along with a performance comparison of the prototype with an externally gate-driven MOSFET switch.
Abstract:
In this article, a Field Programmable Gate Array (FPGA) based hardware accelerator for 3D electromagnetic extraction using the Method of Moments (MoM) is presented. As the number of nets or ports in a system increases, leading to a corresponding increase in the number of right-hand-side (RHS) vectors, the computational cost of multiple matrix-vector products presents a time bottleneck in a linear-complexity fast solver framework. In this work, an FPGA-based hardware implementation is proposed around a two-level parallelization scheme: (i) matrix-level parallelization for a single RHS and (ii) pipelining for multiple RHS. The method is applied to accelerate electrostatic parasitic capacitance extraction of multiple nets in a Ball Grid Array (BGA) package. The acceleration is shown to be linearly scalable with FPGA resources, and speed-ups of over 10x against an equivalent software implementation on a 2.4 GHz Intel Core i5 processor are achieved using a Virtex-6 XC6VLX240T FPGA on Xilinx's ML605 board, with the implemented design operating at a 200 MHz clock frequency. (c) 2016 Wiley Periodicals, Inc. Microwave Opt Technol Lett 58:776-783, 2016
Abstract:
A slope failure develops through progressive external loads and the deterioration of slope geomaterials; the development and occurrence of a landslide is thus a progressive, dynamic process. Site geological properties and other active factors, such as hydrodynamic loads and human activities, are complex and usually unknown, so this dynamic development and occurrence of landslides can only be understood through the progressive accumulation of knowledge about the landslide. For such a progressive process, this paper proposes a dynamic comprehensive control method for landslide control. The method takes full advantage of updated monitoring data and site investigations, and emphasizes the implementation of possible landslide control measures at reasonable stages and in different groups. These measures are intended to prevent the occurrence of a landslide disaster. As a case study, a landslide project at the Panluo open-pit iron mine is analyzed to illustrate the method.
Abstract:
Turbidity measurement of the absolute coagulation rate constant of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross-section of the doublets formed during aggregation. In a previous paper, we showed that, compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology as well as increasing measurement accuracy. This paper provides a more comprehensive discussion of the physical insight behind using the T-matrix method in turbidity measurement, together with the associated technical details. In particular, the importance of using the correct values of the refractive indices of the colloidal particles and the surrounding medium is addressed, because the indices generally vary with the wavelength of the incident light. Comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, optical factors calculated by the T-matrix method are listed for a range of particle radii and incident light wavelengths.