877 results for: High-performance computing hyperspectral imaging


Relevance: 100.00%

Abstract:

Grounded in Vroom's motivational framework of performance, we examine the interactive influence of collective human capital (ability) and aggregated service orientation (motivation) on the cross-level relationship between high-performance work systems (HPWS) and individual-level service quality. Results of hierarchical linear modeling (HLM) revealed that HPWS related to collective human capital and aggregated service orientation, which in turn related to individual-level service quality. Furthermore, both HLM and ordinary least squares regression analyses revealed a cross-level interaction of collective human capital and aggregated service orientation, such that individual-level service quality was highest when both collective human capital and aggregated service orientation were high.
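
For readers unfamiliar with the analytic setup, a minimal sketch of a cross-level interaction model of this kind, fitted with Python's statsmodels, is shown below; the column names (service_quality, human_capital, service_orientation, unit) are hypothetical placeholders, not the study's actual variables.

```python
# Minimal sketch of a cross-level interaction model (hypothetical data):
# individual-level service quality regressed on unit-level human capital,
# unit-level service orientation, and their interaction, with a random
# intercept for each work unit.
import pandas as pd
import statsmodels.formula.api as smf

def fit_cross_level_model(df: pd.DataFrame):
    """df columns (placeholders): service_quality (individual level),
    human_capital and service_orientation (unit level, repeated within
    each unit), unit (grouping id)."""
    model = smf.mixedlm(
        "service_quality ~ human_capital * service_orientation",
        data=df,
        groups=df["unit"],  # random intercept per work unit
    )
    return model.fit()
```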

Relevance: 100.00%

Abstract:

This article proposes a frequency agile antenna whose operating frequency band can be switched. The design is based on a Vivaldi antenna, with high-performance radio-frequency microelectromechanical system (RF-MEMS) switches used to switch between bands centred near 2.7 GHz and 3.9 GHz. The low band covers 2.33 GHz to 3.02 GHz and the high band 3.29 GHz to 4.58 GHz. The average gains of the antenna in the low and high bands are 10.9 and 12.5 dBi, respectively. This high-gain frequency-reconfigurable antenna could replace several narrowband antennas, reducing cost and space while supporting multiple communication systems with good performance.
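
As a quick arithmetic check on the quoted figures (standard definitions, not taken from the article), the band edges imply the centre frequency and fractional bandwidth

```latex
f_c = \tfrac{1}{2}(f_L + f_H), \qquad
\mathrm{FBW} = \frac{2\,(f_H - f_L)}{f_H + f_L}
```

which give f_c ≈ 2.68 GHz and FBW ≈ 25.8% for the low band, and f_c ≈ 3.94 GHz and FBW ≈ 32.8% for the high band, consistent with the 2.7 GHz and 3.9 GHz labels above.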

Relevance: 100.00%

Abstract:

A refractive index sensing system has been demonstrated, based upon an in-line fibre long period grating (LPG) Mach-Zehnder interferometer with a heterodyne interrogation technique. This sensing system has accuracy comparable to laboratory-based techniques used in industry, such as high performance liquid chromatography and UV spectroscopy. The advantage of this system is that measurements can be made in situ for applications in continuous process control. Compared to other refractive index sensing schemes using LPGs, this approach has two main advantages: first, it relies on a simple optical interrogation system and therefore has real potential to be low cost; and second, so far as we are aware, it provides the highest refractive index resolution reported for any fibre LPG device.
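
To illustrate the interrogation principle (a generic I/Q demodulation sketch, not the authors' specific electronics), the phase of the heterodyne beat signal, which tracks the refractive-index-induced optical path change, can be recovered as follows:

```python
# Generic heterodyne phase recovery: mix the detector output with
# quadrature references at the known beat frequency and read the phase
# from the averaged I/Q components. Assumes the record spans many beat
# cycles so the double-frequency terms average out.
import numpy as np

def beat_phase(signal: np.ndarray, fs: float, f_beat: float) -> float:
    """signal: sampled detector output; fs: sample rate (Hz);
    f_beat: heterodyne beat frequency (Hz). Returns phase in radians."""
    t = np.arange(signal.size) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f_beat * t))  # in-phase
    q = np.mean(signal * np.sin(2 * np.pi * f_beat * t))  # quadrature
    return np.arctan2(-q, i)
```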

Relevance: 100.00%

Abstract:

A tilted fiber Bragg grating (TFBG) was integrated as the dispersive element in a high-performance biomedical imaging system. The spectrum emitted by the 23 mm long active region of the fiber is projected through custom-designed optics, consisting of a cylindrical lens for vertical beam collimation followed by an achromatic doublet, onto a linear detector array. High-resolution tomograms of biomedical samples were successfully acquired with the frequency-domain OCT system: tomograms of ophthalmic and dermal samples achieved 2.84 μm axial and 10.2 μm lateral resolution. The miniaturization reduces costs and has the potential to further extend the field of application for OCT systems in biology, medicine, and technology. © 2014 SPIE.
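
For context, the standard frequency-domain OCT reconstruction step (generic, and not specific to the TFBG system above) turns each detected spectrum into a depth profile:

```python
# Generic FD-OCT A-scan reconstruction: resample the detected spectrum
# onto a grid that is uniform in wavenumber k, remove the DC term, and
# Fourier transform to obtain the depth-resolved reflectivity profile.
import numpy as np

def a_scan(spectrum: np.ndarray, wavelengths_nm: np.ndarray) -> np.ndarray:
    """wavelengths_nm must be in increasing order, matched to spectrum."""
    k = 2 * np.pi / wavelengths_nm                      # wavenumber axis (decreasing)
    k_lin = np.linspace(k[-1], k[0], k.size)            # uniform, increasing k grid
    resampled = np.interp(k_lin, k[::-1], spectrum[::-1])
    resampled -= resampled.mean()                       # suppress the DC peak
    return np.abs(np.fft.ifft(resampled))               # depth profile (A-scan)
```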

Relevance: 100.00%

Abstract:

It has never been easy for manufacturing companies to understand their confidence level in terms of how accurately and with what degree of flexibility parts can be made. This uncertainty complicates both the choice of the most suitable manufacturing method and the control of product and process verification systems. The aim of this research is to develop a system for capturing a company's knowledge and expertise and reflecting it in an MRP (Manufacturing Resource Planning) system. A key activity is measuring manufacturing and machining capabilities to a reasonable confidence level, and for this purpose an in-line control measurement system was introduced to the company. Statistical Process Control (SPC) not only helps to predict trends in the manufacture of parts but also minimises human error in measurement, while a Gauge R&R (Repeatability and Reproducibility) study identifies problems in the measurement systems themselves. Measurement, like any other process, is subject to variability; reducing this variation via an automated machine-probing system helps to avoid defects in future products.

Developments in the aerospace, nuclear, and oil and gas industries demand materials with high performance and high temperature resistance under corrosive and oxidising environments. Superalloys were developed in the latter half of the 20th century as high-strength materials for such purposes, and for the same characteristics they are considered difficult-to-cut alloys when it comes to forming and machining. Furthermore, owing to the sensitivity of superalloy applications, they must in many cases be manufactured to tight tolerances. Nickel-based superalloys in particular exhibit low thermal conductivity because of their high nickel content, which causes high surface temperatures on the workpiece during machining and leads to deformation in the final product.

As with every process, material variations have a significant impact on machining quality. The main causes of variation are chemical composition and mechanical hardness, with the non-uniform distribution of metal elements a major source of variation in metallurgical structures. Different heat-treatment standards are designed to process the material to the desired hardness level for each application. To enable corrective actions, a study of the material aspects of superalloys has been conducted in which samples from different batches were analysed. This involved material preparation for microscopy analysis and assessment of the effect of chemical composition on hardness (before and after heat treatment). Some of the results are discussed and presented in this paper.
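
As an illustration of the SPC step described above (the textbook three-sigma Shewhart rule, not the company's specific implementation, and simplified in that classical individuals charts estimate sigma from the moving range):

```python
# Shewhart individuals-chart control limits for in-line measurements:
# flag any part dimension outside mean +/- 3 sigma for investigation.
import numpy as np

def control_limits(x: np.ndarray) -> tuple[float, float, float]:
    """Return (LCL, centre line, UCL) for measurement series x."""
    centre = x.mean()
    sigma = x.std(ddof=1)  # simplified; classical charts use the moving range
    return centre - 3 * sigma, centre, centre + 3 * sigma

def out_of_control(x: np.ndarray) -> np.ndarray:
    lcl, _, ucl = control_limits(x)
    return (x < lcl) | (x > ucl)  # boolean mask of points to investigate
```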

Relevance: 100.00%

Abstract:

Random fiber lasers blend together attractive features of traditional random lasers, such as low cost and simplicity of fabrication, with high-performance characteristics of conventional fiber lasers, such as good directionality and high efficiency. Low coherence of random lasers is important for speckle-free imaging applications. The random fiber laser with distributed feedback proposed in 2010 led to a quickly developing class of light sources that utilize inherent optical fiber disorder in the form of the Rayleigh scattering and distributed Raman gain. The random fiber laser is an interesting and practically important example of a photonic device based on exploitation of optical medium disorder. We provide an overview of recent advances in this field, including high-power and high-efficiency generation, spectral and statistical properties of random fiber lasers, nonlinear kinetic theory of such systems, and emerging applications in telecommunications and distributed sensing.

Relevance: 100.00%

Abstract:

Herein we demonstrate a facile, reproducible, and template-free strategy to prepare g-C3N4–Fe3O4 nanocomposites by an in situ growth mechanism. The results indicate that monodisperse Fe3O4 nanoparticles with diameters as small as 8 nm are uniformly deposited on g-C3N4 sheets, and as a result, aggregation of the Fe3O4 nanoparticles is effectively prevented. The as-prepared g-C3N4–Fe3O4 nanocomposites exhibit significantly enhanced photocatalytic activity for the degradation of rhodamine B under visible-light irradiation. Interestingly, the g-C3N4–Fe3O4 nanocomposites showed good recyclability without apparent loss of photocatalytic activity even after six cycles, and more importantly, g-C3N4–Fe3O4 could be recovered magnetically. The high performance of the g-C3N4–Fe3O4 photocatalysts is due to a synergistic effect comprising the large exposed surface area, high visible-light-absorption efficiency, and enhanced charge separation. In addition, the superparamagnetic behavior of the as-prepared g-C3N4–Fe3O4 nanocomposites makes them promising candidates for applications in lithium storage and bionanotechnology.

Relevance: 100.00%

Abstract:

Human Resource Management, Innovation and Performance investigates the relationship between HRM, innovation and performance. Taking a multi-level perspective, the book reflects critically on contentious themes such as high-performance work systems, organizational design options, cross-boundary working, leadership styles and learning at work.

Relevance: 100.00%

Abstract:

The so-called "High Performance Working System" (HPWS) and the lean production are representing the theoretical and methodological foundations of this paper. In this relation it is worth making distinction between various theoretical streams of the HPWS. The first theoretical stream in the literature is focusing on the diffusion of the Japanese-style management and organizational practices both in the US and in the Europe. The second theoretical strand comprises the approach of sociology of work and dealing with the learning/innovation capabilities of the new forms of work organization. Finally, the third theoretical approach is addressing on the types of knowledge and learning process and their relations with the innovation capabilities of the firm. The authors’ analysis is based on the international comparison, both in regional and in cross country comparison. For regional comparison the share of ICT clusters in Europe, USA and the rest of the world was assessed. For the purpose of the cross-country comparison in the EU, the innovation performance measured by the index Innovation Union Scoreboard (IUS) was used in both the before and after the financial crisis.

Relevance: 100.00%

Abstract:

Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining and knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to heterogeneous, autonomous, distributed database systems. Heterogeneous and multidatabase research has focused on this issue, resulting in many different approaches; however, no single, generally accepted methodology has emerged in academia or industry for providing ubiquitous intelligent data access from heterogeneous, autonomous, distributed information sources.

This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis are: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for semantic heterogeneity resolution that investigates the extents of the meta-data constructs of component schemas and is shown to be correct, complete, and unambiguous; (iii) a semi-automated technique for identifying semantic relations, the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process, which acts as the interface between the integration and query-processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of this work.
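
As a loose illustration of the mediator idea underlying such systems (an invented toy example: the thesis's actual mechanism is Sem-ODM schemas with Semantic SQL, which is not reproduced here), a global query can be rewritten per source using stored attribute mappings and the results merged:

```python
# Toy mediator: map a global attribute name to each source's local
# column name, query every source, and merge the results. All names
# and mappings here are hypothetical.
from typing import Callable

MAPPINGS: dict[str, dict[str, str]] = {
    "source_a": {"salary": "annual_pay"},
    "source_b": {"salary": "wage"},
}

def fan_out(global_attr: str,
            sources: dict[str, Callable[[str], list]]) -> list:
    results = []
    for name, query in sources.items():
        local = MAPPINGS.get(name, {}).get(global_attr, global_attr)
        results.extend(query(local))  # each source queries its own schema
    return results
```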

Relevance: 100.00%

Abstract:

This dissertation introduces a novel automated book reader as an assistive technology tool for persons with blindness. The literature shows extensive work in the area of optical character recognition, but the methodologies currently available for the automated reading of books or bound volumes remain inadequate and are severely constrained during document scanning or image acquisition. The goal of the book reader design is to automate and simplify the task of reading a book while providing a user-friendly environment with a realistic but affordable system design. The design responds to three main concerns: (a) providing a method of image acquisition that maintains the integrity of the source; (b) overcoming optical character recognition errors created by inherent imaging issues such as curvature effects and barrel distortion; and (c) determining a suitable method for accurate recognition of characters that yields an interface able to read from any open book with a reading accuracy nearing 98%. The initial aim of this research is to develop an assistive technology tool to help persons with blindness read books and other bound volumes, but its secondary and broader aim is to provide a platform for the digitization of bound documentation, in line with the mission of the Open Content Alliance (OCA), a nonprofit alliance dedicated to making reading materials available in digital form. The theoretical contribution of this research lies in the mathematical developments made to resolve both the inherent distortions due to the properties of the camera lens and the anticipated distortions caused by the changing page curvature as one leafs through the book. This is evidenced by a significant increase in the character recognition rate and highly accurate read-out through text-to-speech processing. This reasonably priced interface, with its high performance and its compatibility with any computer or laptop through universal serial bus connectors, greatly extends the prospects for universal accessibility to documentation.
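
To make the distortion-correction concern concrete, here is a minimal sketch using OpenCV's standard radial-distortion model rather than the dissertation's own mathematical development; the camera matrix and distortion coefficients are illustrative placeholders that would normally come from a calibration step (e.g., cv2.calibrateCamera with a checkerboard):

```python
# Sketch of barrel-distortion correction with OpenCV's radial model.
# Intrinsics and coefficients below are placeholders, not calibrated values.
import cv2
import numpy as np

def undistort_page(image: np.ndarray) -> np.ndarray:
    h, w = image.shape[:2]
    camera_matrix = np.array([[w, 0, w / 2],     # focal length ~ image width,
                              [0, w, h / 2],     # principal point at centre
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.array([-0.25, 0.05, 0.0, 0.0, 0.0])  # k1 < 0 corrects barrel
    return cv2.undistort(image, camera_matrix, dist_coeffs)
```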

Relevance: 100.00%

Abstract:

Since the 1990s, scholars have paid special attention to public management's role in theory and research, under the assumption that effective management is one of the primary means of achieving superior performance. To some extent this was influenced by popular business writings of the 1980s as well as the reinventing-government literature of the 1990s. A number of case studies, but few quantitative research papers, have been published showing that management matters to the performance of public organizations.

My study examined, using quantitative techniques, whether management capacity increased organizational performance. The specific research problem was whether significant differences existed between high- and average-performing public housing agencies on select criteria identified in the Government Performance Project (GPP) management capacity model, and whether this model could predict outcome performance measures in a statistically significant manner while controlling for exogenous influences. My model included two of the four GPP management subsystems (human resources and information technology), integration and alignment of subsystems, and an overall managing-for-results framework. It also included environmental and client control variables hypothesized to affect performance independent of management action.

Descriptive results of survey responses showed that high-performing agencies scored better on most high-performance dimensions of individual criteria, suggesting support for the model; however, quantitative analysis found limited statistically significant differences between high and average performers and limited predictive power of the model. My analysis led to the following major conclusions: past performance was the strongest predictor of present performance; high unionization hurt performance; and the budget-related criterion mattered more for high performance than other model factors. As to the specific research question, management capacity may be necessary, but it is not sufficient, to increase performance.

The research suggests managers may benefit from implementing best practices identified through the GPP model. The usefulness of the model could be improved by adding direct service delivery to it, which may also improve its predictive power. Finally, there are abundant tested concepts and tools for improving system performance available to practitioners to strengthen management subsystem support of direct service delivery.

Relevance: 100.00%

Abstract:

Memory (cache, DRAM, and disk) is in charge of providing data and instructions to a computer's processor. To maximize performance, the speeds of the memory and the processor should be equal, but memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system through memory caches, sacrificing some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where each memory level acts as the cache for a larger and slower level immediately below it. By using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to that of the fastest.

The most important decision about managing a cache is what data to store in it; failing to make good decisions leads to performance overheads and over-provisioning. Surprisingly, caches choose data to store based on policies that have not changed in principle for decades, while computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds or even thousands of processes. Second, caching is being employed at new levels of the storage hierarchy due to the availability of high-performance flash-based persistent media. This brings four problems. First, as the number of workloads sharing a cache increases, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches that, if not managed carefully, translates to wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. Finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy.

We addressed these problems by modeling their impact and proposing solutions for each. First, we measured and modeled the amount of duplication at the buffer-cache level and contention in real production systems. Second, we created a unified model of workload cache usage under contention, to be used by administrators for provisioning or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and eliminating the space wasted by contention. Finally, we proposed a technique that improves the consistency guarantees of write-back caches while preserving their performance benefits.
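
For context, a minimal sketch of the kind of decades-old policy the dissertation refers to: least-recently-used (LRU) eviction, which keeps whatever was touched most recently and evicts the coldest entry when the cache is full.

```python
# Textbook LRU cache: an ordered map where access moves a key to the
# "most recent" end and overflow evicts from the "least recent" end.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None                    # miss: fetch from the slower level
        self.store.move_to_end(key)        # hit: mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used
```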

Relevance: 100.00%

Abstract:

The move from Standard Definition (SD) to High Definition (HD) represents a six-fold increase in the data that must be processed. With expanding resolutions and evolving compression, there is a need for high performance with flexible architectures that allow for quick upgradability, as display resolutions, compression techniques, and video intelligence continue to advance. Software implementations of these systems can attain accuracy, but with trade-offs among processing performance (achieving specified frame rates on large image data sets), power, and cost. New architectures are needed to keep pace with the fast innovation in video and imaging; this work therefore implements the pixel- and frame-rate processes in dedicated hardware on a Field Programmable Gate Array (FPGA) to achieve real-time performance.

The contributions of the dissertation are as follows. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles); for low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe-distance factor and develop an algorithm for detecting the occurrence of occlusion during target tracking; a novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure and analysed on American Sign Language (ASL) gestures; we introduce a novel points-of-interest approach to reduce the feature-vector size and a gradient-threshold approach for accurate classification. (4) We design a gesture recognition system using hardware/software co-simulation of a neural network, exploiting the high speed and low memory-storage requirements of the FPGA; we develop an innovative maximum-distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system. Because the gestures involved in different applications may vary, it is essential to keep the feature vector as small as possible while maintaining accuracy and performance.
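
For orientation, the textbook running-average background-subtraction technique that the RAMT approach builds on looks as follows (a generic sketch, not the dissertation's RAMT, which additionally globalizes the threshold):

```python
# Generic running-average background subtraction: pixels that differ
# from the slowly updated background by more than a threshold are
# flagged as foreground (moving targets).
import numpy as np

def detect_targets(frame: np.ndarray, background: np.ndarray,
                   alpha: float = 0.05, threshold: float = 25.0):
    """frame, background: float grayscale images of equal shape.
    Returns (foreground mask, updated background)."""
    mask = np.abs(frame - background) > threshold           # moving pixels
    background = (1 - alpha) * background + alpha * frame   # slow update
    return mask, background
```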

Relevance: 100.00%

Abstract:

Compact thermal-fluid systems are found in many industries, from aerospace to microelectronics, where a combination of small size, light weight, and fluid networks with a high ratio of surface area to volume is necessary. These devices are typically designed with fluid networks consisting of many small parallel channels that pack a large amount of heat-transfer surface area into a very small volume, but do so at the cost of increased pumping-power requirements.

To offset this cost, the use of a branching fluid network for the distribution of coolant within a heat sink is investigated. The goal of the branch design technique is to minimize the entropy generation associated with the combination of viscous dissipation and convective heat transfer experienced by the coolant, while maintaining a compact, high ratio of heat-transfer surface area to volume.

The derivation of Murray's Law, originally developed to predict the geometry of physiological transport systems, is extended to heat sink designs that minimize entropy generation. Two heat sink designs at different scales were built and tested experimentally and analytically: the first uses this new derivation of Murray's Law, and the second uses a combination of Murray's Law and Constructal Theory. The experimental results were used to verify the analytical and numerical models, which were then used to compare the heat sink's performance with other compact high-performance heat sink designs. The results showed that these branching-network design techniques significantly improve the performance of active heat sinks. The design experience gained was then used to develop a set of geometric relations that optimize the heat-transfer to pumping-power ratio of a single cooling-channel element; the elements can be connected together using derived geometric guidelines governing branch diameters and angles. The methodology can be used to design branching fluid networks that fit any geometry.
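
For reference, the classical Murray's-law sizing rule that this work extends states that at a bifurcation the cube of the parent diameter equals the sum of the cubes of the child diameters; the entropy-generation-minimizing variant derived in the dissertation is not reproduced here.

```python
# Classical Murray's law: d_parent**3 = sum(d_child**3), so a symmetric
# split into n children gives d_child = d_parent / n**(1/3).
def child_diameter(d_parent: float, n_children: int) -> float:
    """Diameter of each child channel at a symmetric bifurcation."""
    return d_parent / n_children ** (1.0 / 3.0)

# Example: a 4 mm channel splitting symmetrically into two branches
print(child_diameter(4.0, 2))  # ~3.17 mm each
```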