872 results for High-performance computing


Relevance: 90.00%

Abstract:

It has never been easy for manufacturing companies to understand their confidence level in terms of how accurately, and with what degree of flexibility, parts can be made. This brings uncertainty in finding the most suitable manufacturing method as well as in controlling product and process verification systems. The aim of this research is to develop a system for capturing a company's knowledge and expertise and reflecting it in an MRP (Manufacturing Resource Planning) system. A key activity here is measuring manufacturing and machining capabilities to a reasonable confidence level. For this purpose an in-line control measurement system is introduced to the company. Using SPC (Statistical Process Control) not only helps to predict trends in the manufacture of parts but also minimises human error in measurement. A Gauge R&R (Repeatability and Reproducibility) study identifies problems in the measurement system. Measurement is like any other process in terms of variability, and reducing this variation via an automated machine probing system helps to avoid defects in future products.

Developments in the aerospace, nuclear, and oil and gas industries demand materials with high performance and high temperature resistance in corrosive and oxidising environments. Superalloys were developed in the latter half of the 20th century as high-strength materials for such purposes. For the same reasons, superalloys are considered difficult-to-cut alloys when it comes to forming and machining. Furthermore, because of the sensitivity of superalloy applications, in many cases they must be manufactured to tight tolerances. In addition, nickel-based superalloys in particular have low thermal conductivity owing to the high nickel content of their composition. This causes high surface temperatures on the workpiece during machining, which can lead to deformation of the final product.

As with every process, material variations have a significant impact on machining quality. The main variations originate from chemical composition and mechanical hardness; the non-uniform distribution of metal elements is a major source of variation in metallurgical structure. Different heat treatment standards are designed to process the material to the desired hardness level for each application. In order to take corrective actions, a study of the material aspects of superalloys has been conducted. In this study, samples from different material batches were analysed, covering sample preparation for microscopy analysis and the effect of chemical composition on hardness (before and after heat treatment). Some of the results are discussed and presented in this paper.
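
As an aside on the SPC step described above, control limits for an in-line measurement process are typically derived from subgroup means and ranges. The sketch below is purely illustrative and not the company's system: the subgroup data are invented, and only the standard Shewhart constants for subgroups of size five are assumed.

```python
# Illustrative X-bar / R control-limit calculation (hypothetical data).
import statistics

# Hypothetical in-line probe measurements (mm), grouped into subgroups of 5
subgroups = [
    [10.01, 10.03, 9.99, 10.02, 10.00],
    [10.00, 10.02, 10.01, 9.98, 10.03],
    [9.99, 10.01, 10.00, 10.02, 10.01],
]

A2, D3, D4 = 0.577, 0.0, 2.114          # Shewhart constants for subgroup size n = 5

xbars = [statistics.mean(s) for s in subgroups]
ranges = [max(s) - min(s) for s in subgroups]
xbarbar, rbar = statistics.mean(xbars), statistics.mean(ranges)

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar   # X-bar chart limits
ucl_r, lcl_r = D4 * rbar, D3 * rbar                       # R chart limits
print(f"X-bar limits: [{lcl_x:.4f}, {ucl_x:.4f}]  R limits: [{lcl_r:.4f}, {ucl_r:.4f}]")
```

Points falling outside these limits would flag the machining or measurement process for investigation, which is the trend-prediction role SPC plays in the abstract.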

Relevance: 90.00%

Abstract:

Herein we demonstrate a facile, reproducible, and template-free strategy to prepare g-C3N4–Fe3O4 nanocomposites by an in situ growth mechanism. The results indicate that monodisperse Fe3O4 nanoparticles with diameters as small as 8 nm are uniformly deposited on g-C3N4 sheets, and as a result, aggregation of the Fe3O4 nanoparticles is effectively prevented. The as-prepared g-C3N4–Fe3O4 nanocomposites exhibit significantly enhanced photocatalytic activity for the degradation of rhodamine B under visible-light irradiation. Interestingly, the g-C3N4–Fe3O4 nanocomposites showed good recyclability without apparent loss of photocatalytic activity even after six cycles, and more importantly, g-C3N4–Fe3O4 could be recovered magnetically. The high performance of the g-C3N4–Fe3O4 photocatalysts is due to a synergistic effect including the large surface-exposure area, high visible-light-absorption efficiency, and enhanced charge-separation properties. In addition, the superparamagnetic behavior of the as-prepared g-C3N4–Fe3O4 nanocomposites also makes them promising candidates for applications in the fields of lithium storage and bionanotechnology.

Relevance: 90.00%

Abstract:

Human Resource Management, Innovation and Performance investigates the relationship between HRM, innovation and performance. Taking a multi-level perspective, the book reflects critically on contentious themes such as high-performance work systems, organizational design options, cross-boundary working, leadership styles and learning at work.

Relevance: 90.00%

Abstract:

The so-called "High Performance Working System" (HPWS) and lean production represent the theoretical and methodological foundations of this paper. In this context it is worth distinguishing between the various theoretical streams of the HPWS literature. The first stream focuses on the diffusion of Japanese-style management and organizational practices in both the US and Europe. The second strand takes the sociology-of-work approach and deals with the learning/innovation capabilities of new forms of work organization. Finally, the third approach addresses the types of knowledge and learning processes and their relation to the innovation capabilities of the firm. The authors' analysis is based on international comparison, at both the regional and the cross-country level. For the regional comparison, the share of ICT clusters in Europe, the USA, and the rest of the world was assessed. For the cross-country comparison within the EU, innovation performance as measured by the Innovation Union Scoreboard (IUS) index was used for the periods both before and after the financial crisis.

Relevance: 90.00%

Abstract:

Today, databases have become an integral part of information systems. In the past two decades, we have seen different database systems being developed independently and used in different application domains. Today's interconnected networks and advanced applications, such as data warehousing, data mining & knowledge discovery, and intelligent data access to information on the Web, have created a need for integrated access to such heterogeneous, autonomous, distributed database systems. Heterogeneous/multidatabase research has focused on this issue, resulting in many different approaches. However, no single, generally accepted methodology has emerged in academia or industry that provides ubiquitous intelligent data access to heterogeneous, autonomous, distributed information sources.

This thesis describes a heterogeneous database system being developed at the High-performance Database Research Center (HPDRC). A major impediment to ubiquitous deployment of multidatabase technology is the difficulty of resolving semantic heterogeneity, that is, of identifying related information sources for integration and querying purposes. Our approach considers the semantics of the meta-data constructs in resolving this issue. The major contributions of the thesis work include: (i) a scalable, easy-to-implement architecture for developing a heterogeneous multidatabase system, utilizing the Semantic Binary Object-oriented Data Model (Sem-ODM) and the Semantic SQL query language to capture the semantics of the data sources being integrated and to provide an easy-to-use query facility; (ii) a methodology for semantic heterogeneity resolution that investigates the extents of the meta-data constructs of component schemas, shown to be correct, complete and unambiguous; (iii) a semi-automated technique for identifying semantic relations, which are the basis of semantic knowledge for integration and querying, using shared ontologies for context mediation; (iv) resolutions for schematic conflicts and a language for defining global views from a set of component Sem-ODM schemas; (v) the design of a knowledge base for storing and manipulating meta-data and knowledge acquired during the integration process, which acts as the interface between the integration and query processing modules; (vi) techniques for Semantic SQL query processing and optimization based on semantic knowledge in a heterogeneous database environment; and (vii) a framework for intelligent computing and communication on the Internet applying the concepts of this work.
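
To make the idea of context mediation via shared ontologies more concrete, the following is a hypothetical Python sketch, not the thesis's Sem-ODM or Semantic SQL machinery: a shared ontology maps each global-view attribute to the attribute name used by each component schema, and a mediator rewrites a global query for each source. All schema and attribute names are invented for illustration.

```python
# Hypothetical context mediation through a shared ontology (illustrative only).

SHARED_ONTOLOGY = {
    # global attribute -> name used in each component schema
    "employee_name": {"hr_db": "emp_name",   "payroll_db": "worker"},
    "salary":        {"hr_db": "annual_pay", "payroll_db": "gross_salary"},
}

def rewrite_for_source(global_attrs, source):
    """Translate global-view attribute names into a component schema's names."""
    return [SHARED_ONTOLOGY[attr][source] for attr in global_attrs]

# A query over the global view ("employee_name", "salary") is rewritten per source:
for src in ("hr_db", "payroll_db"):
    print(src, "->", rewrite_for_source(["employee_name", "salary"], src))
```

In the thesis itself such mappings are captured in the knowledge base and exploited during Semantic SQL query processing; the dictionary above merely illustrates the kind of semantic relation being identified.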

Relevance: 90.00%

Abstract:

Since the 1990s, scholars have paid special attention to public management's role in theory and research under the assumption that effective management is one of the primary means for achieving superior performance. To some extent, this was influenced by the popular business writings of the 1980s as well as the reinventing literature of the 1990s. A number of case studies, but few quantitative research papers, have been published showing that management matters in the performance of public organizations.

My study examined whether management capacity increased organizational performance, using quantitative techniques. The specific research problem analyzed was whether significant differences existed between high- and average-performing public housing agencies on select criteria identified in the Government Performance Project (GPP) management capacity model, and whether this model could predict outcome performance measures in a statistically significant manner while controlling for exogenous influences. My model included two of the four GPP management subsystems (human resources and information technology), integration and alignment of subsystems, and an overall managing-for-results framework. It also included environmental and client control variables that were hypothesized to affect performance independent of management action.

Descriptive results of survey responses showed that high-performing agencies had better scores on most high-performance dimensions of the individual criteria, suggesting support for the model; however, quantitative analysis found limited statistically significant differences between high and average performers and limited predictive power of the model. My analysis led to the following major conclusions: past performance was the strongest predictor of present performance; high unionization hurt performance; and the budget-related criterion mattered more for high performance than other model factors. As to the specific research question, management capacity may be necessary, but it is not sufficient, to increase performance.

The research suggested managers may benefit by implementing best practices identified through the GPP model. The usefulness of the model could be improved by adding direct service delivery to it, which may also improve its predictive power. Finally, there are abundant tested concepts and tools available to practitioners for improving management subsystem support of direct service delivery.

Relevance: 90.00%

Abstract:

Memory (cache, DRAM, and disk) is responsible for providing data and instructions to a computer's processor. To maximize performance, the speeds of the memory and the processor should be equal. However, using memory that always matches the speed of the processor is prohibitively expensive. Computer hardware designers have managed to drastically lower the cost of the system through the use of memory caches, at the cost of some performance. A cache is a small piece of fast memory that stores popular data so it can be accessed faster. Modern computers have evolved into a hierarchy of caches, where each memory level is the cache for a larger and slower memory level immediately below it. Thus, by using caches, manufacturers are able to store terabytes of data at the cost of the cheapest memory while achieving speeds close to that of the fastest one.

The most important decision in managing a cache is what data to store in it. Failing to make good decisions can lead to performance overheads and over-provisioning. Surprisingly, caches choose the data to store based on policies that have not changed in principle for decades. However, computing paradigms have changed radically, leading to two noticeably different trends. First, caches are now consolidated across hundreds to even thousands of processes. Second, caching is being employed at new levels of the storage hierarchy due to the availability of high-performance flash-based persistent media. This brings four problems. First, as the number of workloads sharing a cache increases, it is more likely that they contain duplicated data. Second, consolidation creates contention for caches, and if not managed carefully, it translates into wasted space and sub-optimal performance. Third, as contended caches are shared by more workloads, administrators need to carefully estimate specific per-workload requirements across the entire memory hierarchy in order to meet per-workload performance goals. Finally, current cache write policies are unable to simultaneously provide performance and consistency guarantees for the new levels of the storage hierarchy.

We addressed these problems by modeling their impact and by proposing solutions for each of them. First, we measured and modeled the amount of duplication at the buffer cache level and the contention in real production systems. Second, we created a unified model of workload cache usage under contention, to be used by administrators for provisioning or by process schedulers to decide which processes to run together. Third, we proposed methods for removing cache duplication and for eliminating the space wasted because of contention. Finally, we proposed a technique to improve the consistency guarantees of write-back caches while preserving their performance benefits.
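
As a concrete illustration of the write-policy trade-off mentioned above (this is a generic sketch, not the dissertation's system), the class below implements an LRU cache that can run either write-through, which keeps the backing store consistent on every write, or write-back, which defers writes until eviction for better performance at the cost of weaker consistency. The backing store is just a dictionary standing in for a slower memory level.

```python
# Generic LRU cache with selectable write policy (illustrative sketch).
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity, backing_store, write_back=True):
        self.capacity = capacity
        self.store = backing_store        # stands in for the slower level below
        self.write_back = write_back
        self.data = OrderedDict()         # key -> (value, dirty)

    def read(self, key):
        if key in self.data:              # hit: refresh recency
            self.data.move_to_end(key)
            return self.data[key][0]
        value = self.store.get(key)       # miss: fetch from the slower level
        self._insert(key, value, dirty=False)
        return value

    def write(self, key, value):
        if self.write_back:
            self._insert(key, value, dirty=True)   # defer the slow write
        else:
            self.store[key] = value                # write-through: consistent now
            self._insert(key, value, dirty=False)

    def _insert(self, key, value, dirty):
        self.data[key] = (value, dirty)
        self.data.move_to_end(key)
        while len(self.data) > self.capacity:      # evict least recently used
            old_key, (old_val, old_dirty) = self.data.popitem(last=False)
            if old_dirty:
                self.store[old_key] = old_val      # flush dirty data on eviction
```

Consolidating many workloads onto such a cache is exactly where the duplication and contention problems described above appear.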

Relevance: 90.00%

Abstract:

Compact thermal-fluid systems are found in many industries, from aerospace to microelectronics, where a combination of small size, light weight, and high surface-area-to-volume-ratio fluid networks is necessary. These devices are typically designed with fluid networks consisting of many small parallel channels that effectively pack a large amount of heat transfer surface area into a very small volume, but do so at the cost of increased pumping power requirements.

To offset this cost, the use of a branching fluid network for the distribution of coolant within a heat sink is investigated. The goal of the branch design technique is to minimize the entropy generation associated with the combination of viscous dissipation and convection heat transfer experienced by the coolant in the heat sink, while maintaining compact, high heat-transfer-surface-area-to-volume ratios.

The derivation of Murray's Law, originally developed to predict the geometry of physiological transport systems, is extended to heat sink designs which minimize entropy generation. Two heat sink designs at different scales were built and tested experimentally and analytically. The first uses this new derivation of Murray's Law; the second uses a combination of Murray's Law and Constructal Theory. The results of the experiments were used to verify the analytical and numerical models, which were then used to compare the performance of the heat sink with other compact high-performance heat sink designs. The results showed that the techniques used to design branching fluid networks significantly improve the performance of active heat sinks. The design experience gained was then used to develop a set of geometric relations which optimize the heat-transfer-to-pumping-power ratio of a single cooling channel element. Each element can be connected to others using a set of derived geometric guidelines governing branch diameters and angles. The methodology can be used to design branching fluid networks that fit any geometry.
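
For readers unfamiliar with Murray's Law, its classical statement for a branching conduit is that the cube of the parent diameter equals the sum of the cubes of the daughter diameters, d_0^3 = sum of d_i^3. The sketch below works that relation through for symmetric branchings with invented dimensions; it does not reproduce the entropy-generation extension derived in the thesis.

```python
# Classical Murray's Law for symmetric branchings (illustrative values only).

def murray_daughter_diameter(d_parent, n_branches):
    """Daughter diameter d such that d_parent**3 == n_branches * d**3."""
    return d_parent * n_branches ** (-1.0 / 3.0)

d0 = 4.0e-3                               # assumed 4 mm parent channel
for n in (2, 3, 4):
    d = murray_daughter_diameter(d0, n)
    print(f"{n} daughters: diameter = {d * 1e3:.2f} mm")
# A bifurcation (n = 2) gives d = 4.0 * 2**(-1/3), roughly 3.17 mm.
```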

Relevance: 90.00%

Abstract:

The geochemical cycling of barium was investigated in sediments of pockmarks of the northern Congo Fan, characterized by surface and subsurface gas hydrates, chemosynthetic fauna, and authigenic carbonates. Two gravity cores retrieved from the so-called Hydrate Hole and Worm Hole pockmarks were examined using high-resolution pore-water and solid-phase analyses. The results indicate that, although gas hydrates in the study area are stable with respect to pressure and temperature, they are, and have been, subject to dissolution due to methane-undersaturated pore waters. The main process driving this dissolution is the anaerobic oxidation of methane (AOM) above the shallowest hydrate-bearing sediment layer. It is suggested that episodic seep events temporarily increase the upward flux of methane and induce hydrate formation close to the sediment surface. AOM establishes itself at the sediment depth where the upward flux of methane from the uppermost hydrate layer counterbalances the downward flux of seawater sulfate. After seepage ceases, AOM continues to consume methane at the sulfate/methane transition (SMT) above the hydrates, thereby driving the progressive dissolution of the hydrates "from above". As a result, the SMT migrates downward, leaving behind enrichments of authigenic barite and carbonates that typically precipitate at this biogeochemical reaction front. Calculating the time needed to produce the observed solid-phase barium enrichments above the present-day depths of the SMT served to track the net downward migration of the SMT and to estimate the total duration of hydrate dissolution in the recovered sediments. Methane fluxes were higher, and the SMT was located closer to the sediment surface, in the past at both sites. Active seepage and hydrate formation are inferred to have occurred only a few thousand years ago at the Hydrate Hole site. By contrast, AOM-driven hydrate dissolution as a consequence of an overall net decrease in upward methane flux seems to have persisted for a considerably longer time, a few tens of thousands of years, at the Worm Hole site.

Relevance: 90.00%

Abstract:

With the emerging prevalence of smart phones and 4G LTE networks, the demand for faster, better, cheaper mobile services anytime and anywhere is ever growing. The Dynamic Network Optimization (DNO) concept emerged as a solution that optimally and continuously tunes network settings in response to varying network conditions and subscriber needs. Yet the realization of DNO is still in its infancy, largely hindered by the bottleneck of lengthy optimization runtimes. This paper presents the design and prototype of a novel cloud-based parallel solution that further enhances the scalability of our prior work on parallel solutions for accelerating network optimization algorithms. The solution aims to satisfy the high performance required by DNO, preliminarily on a sub-hourly basis. The paper subsequently envisions a design and a full cycle of a DNO system. A set of potential solutions for large-network and real-time DNO is also proposed. Overall, this work constitutes a breakthrough towards the realization of DNO.
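
The core of any such acceleration is that candidate network settings can be evaluated independently and therefore in parallel. The snippet below is a deliberately generic, single-machine illustration of that idea using Python's multiprocessing module; it is not the paper's cloud prototype, and the candidate parameters and cost function are invented.

```python
# Generic parallel evaluation of candidate network settings (illustrative only).
from multiprocessing import Pool

def evaluate_candidate(settings):
    """Stand-in for an expensive KPI evaluation of one candidate configuration."""
    return sum((value - 10.0) ** 2 for value in settings.values())

if __name__ == "__main__":
    candidates = [{"tilt": t, "power": p} for t in range(16) for p in range(5, 21)]
    with Pool() as pool:                       # spread evaluations across cores
        scores = pool.map(evaluate_candidate, candidates)
    best = candidates[min(range(len(scores)), key=scores.__getitem__)]
    print("best candidate:", best)
```

A cloud deployment would replace the local worker pool with distributed workers, in the spirit of the scalability enhancements the paper reports.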

Relevance: 90.00%

Abstract:

Big Data Analytics is an emerging field, since massive storage and computing capabilities have been made available by advanced e-infrastructures. The Earth and Environmental sciences are likely to benefit from Big Data Analytics techniques supporting the processing of the large number of Earth Observation datasets currently acquired and generated through observations and simulations. However, Earth Science data and applications present specificities in terms of the relevance of geospatial information, the wide heterogeneity of data models and formats, and the complexity of processing. Therefore, Big Earth Data Analytics requires specifically tailored techniques and tools. The EarthServer Big Earth Data Analytics engine offers a solution for coverage-type datasets, built around a high-performance array database technology and the adoption and enhancement of standards for service interaction (OGC WCS and WCPS). The EarthServer solution, guided by the collection of requirements from scientific communities and international initiatives, provides a holistic approach that ranges from query languages and scalability up to mobile access and visualization. The result is demonstrated and validated through the development of lighthouse applications in the Marine, Geology, Atmospheric, Planetary and Cryospheric science domains.
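
For readers unfamiliar with WCPS, a query is a small functional expression over named coverages that the server evaluates and encodes. The sketch below shows the general request pattern against a WCS endpoint that supports the Processing extension, of the kind EarthServer builds on; the endpoint URL and coverage name here are hypothetical.

```python
# Hypothetical WCPS request against a WCS endpoint (illustrative only).
import requests

ENDPOINT = "https://example.org/rasdaman/ows"   # placeholder service URL
wcps_query = 'for c in (AvgTemperatureCoverage) return encode(avg(c), "text/csv")'

response = requests.get(ENDPOINT, params={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",              # WCS Processing extension operation
    "query": wcps_query,
})
print(response.status_code, response.text[:200])
```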

Relevance: 90.00%

Abstract:

This paper presents a novel high-symmetry balun which significantly improves the performance of dipole-based dual-polarized antennas. The new balun structure provides enhanced differential capability, leading to high performance in terms of port-to-port isolation and far-field cross-polarization. An example antenna using this balun is proposed. The simulated results show a fractional bandwidth of 53.5% over the band 1.71-2.96 GHz (VSWR < 1.5) and port-to-port isolation > 59 dB. The radiation characteristics show around 9 dBi of gain and far-field cross-polarization < -48 dBi over the entire bandwidth. The detailed balun operation and full antenna measurements will be presented at the conference. A performance comparison with similar structures will also be provided.
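
The headline figures above follow directly from the standard definitions of fractional bandwidth and VSWR, as the quick check below shows (plain arithmetic, no assumptions beyond the quoted band edges and VSWR limit).

```python
# Sanity check of the quoted bandwidth and matching figures.
import math

f_low, f_high = 1.71e9, 2.96e9                       # band edges from the abstract (Hz)
fbw = 2 * (f_high - f_low) / (f_high + f_low)
print(f"fractional bandwidth = {fbw:.1%}")           # ~53.5%

vswr = 1.5
gamma = (vswr - 1) / (vswr + 1)                      # reflection coefficient magnitude
return_loss_db = -20 * math.log10(gamma)
print(f"VSWR < {vswr} -> |Gamma| < {gamma:.2f}, return loss > {return_loss_db:.1f} dB")
```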

Relevance: 90.00%

Abstract:

Through the awareness-raising efforts of several high-profile current and former athletes, the issue of common mental disorders (CMD) in the athletic population is gaining increasing attention from researchers and practitioners alike. Yet the prevalence is unclear and most likely under-reported. Whilst the characteristics of the sporting environment may generate CMD within the athletic population, they may also exacerbate pre-existing conditions, and hence it is not surprising that sport psychology and sport science practitioners anecdotally report increased incidences of athletes seeking support for CMD. In a population where there are many barriers to reporting and seeking help for CMD, due in part to the culture of the high-performance sporting environment, anecdotal reports suggest that athletes asking for help approach the personnel they are most comfortable talking to. In some cases, this may be a sport scientist, the sport psychologist or a sport psychology consultant. Among personnel in the sporting domain, there is a perception that the sport psychologist or sport psychology consultant is best placed to assist athletes seeking assistance for CMD. However, sport psychology as a profession is split between two competing philosophical perspectives: one suggests that sport psychologists should work exclusively with athletes on performance enhancement, while the other views the athlete more holistically and accepts that their welfare may directly impact their performance. To complicate matters further, the development of sport psychology as a profession varies widely between countries, meaning that practice in this field is not always clearly defined. This article examines case studies that illustrate the blurred lines in applied sport psychology practice, highlighting challenges with the referral process in the U.K. athletic population. The article concludes with suggestions for ensuring that the field of applied sport psychology continues to evolve and reconfigure itself so that it keeps meeting the demands of its clients.

Relevance: 90.00%

Abstract:

Due to the increasing integration density and operating frequency of today's high-performance processors, the temperature of a typical chip can easily exceed 100 degrees Celsius. However, the runtime thermal state of a chip is very hard to predict and manage due to the random nature of computing workloads, as well as process, voltage and ambient temperature variability (together called PVT variability). The uneven nature (both in time and space) of the chip's heat dissipation can lead to severe reliability issues and error-prone chip behavior (e.g. timing errors). Many dynamic power/thermal management techniques have been proposed to address this issue, such as dynamic voltage and frequency scaling (DVFS), clock gating, etc. However, most such techniques require accurate knowledge of the runtime thermal state of the chip to make efficient and effective control decisions.

In this work we address the problem of tracking and managing the temperature of microprocessors, which includes the following sub-problems: (1) how to design an efficient sensor-based thermal tracking system for a given design that provides accurate real-time temperature feedback; (2) which statistical techniques can be used to estimate the full-chip thermal profile from very limited (and possibly noise-corrupted) sensor observations; and (3) how to adapt to changes in the underlying system's behavior, since such changes could impact the accuracy of the thermal estimation.

The thermal tracking methodology proposed in this work is enabled by on-chip sensors, which are already implemented in many modern processors. We first investigate the underlying relationship between heat distribution and power consumption, and then introduce an accurate thermal model of the chip. Based on this model, we characterize the temperature correlation that exists among different chip modules and explore statistical approaches (such as those based on the Kalman filter) that utilize this correlation to estimate accurate chip-level thermal profiles in real time. The estimation is performed from limited sensor information because sensors are usually resource-constrained and noise-corrupted. We also extend the standard Kalman filter approach to account for (1) nonlinear effects such as the leakage-temperature interdependency and (2) varying statistical characteristics of the underlying system model. The proposed thermal tracking infrastructure and estimation algorithms consistently generate accurate thermal estimates even when the system switches among workloads with very distinct characteristics. In experiments, our approaches demonstrated promising results with much higher accuracy than existing approaches. These results can be used to ensure thermal reliability and improve the effectiveness of dynamic thermal management techniques.
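
Since the abstract leans on Kalman filtering, a minimal sketch of the estimation step may help: given a linear thermal model x_{k+1} = A x_k + w and sparse noisy sensor readings z_k = H x_k + v, the filter fuses the model prediction with the observations. All matrices below are placeholders; identifying A, Q and R for a real chip is exactly the modeling work the dissertation describes.

```python
# Standard Kalman filter predict/update step for thermal tracking (sketch only).
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle: x, P are the prior state estimate and covariance."""
    x_pred = A @ x                                 # predict module temperatures
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R                       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)          # correct with noisy sensor data
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: 4 chip modules, 2 on-chip sensors (all matrices are assumptions).
n, m = 4, 2
A = 0.9 * np.eye(n)                                # placeholder thermal dynamics
H = np.zeros((m, n)); H[0, 0] = H[1, 2] = 1.0      # sensors on modules 0 and 2
Q, R = 0.01 * np.eye(n), 0.25 * np.eye(m)          # process / measurement noise
x, P = np.full(n, 60.0), np.eye(n)                 # start at 60 C, unit uncertainty
z = np.array([63.0, 58.5])                         # one round of noisy readings
x, P = kalman_step(x, P, z, A, H, Q, R)
print(np.round(x, 2))
```

Handling the leakage-temperature nonlinearity mentioned above would require an extended or adaptive variant rather than this basic linear form.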