952 results for Data-stream balancing
Abstract:
This thesis studies the possibility of using lean tools and methods in a quotation process carried out in an office environment. The aim of the study was to identify and test the relevant lean tools and methods that can help to balance and standardize the quotation process and reduce the variance in quotation lead times and quality. Seminal works, research papers, and guidebooks related to the topic were used as the basis for the theory development. Based on the literature review and the case company's own lean experience, the applicable lean tools and methods were selected to be tested by a sales support team. Leveling production, through product categorization and value stream mapping, was a key method used to balance the quotation process. The 5S method was introduced concurrently to standardize the work. Results of the testing period showed that lean tools and methods are applicable to an office process and that the selected tools and methods helped to balance and standardize the quotation process. The case company's sales support team decided to implement the new lean-based quotation process model.
Abstract:
With the new age of the Internet of Things (IoT), everyday objects such as mobile smart devices are starting to be equipped with cheap sensors and low-energy wireless communication capability. Nowadays mobile smart devices (phones, tablets) have become ubiquitous, with everyone having access to at least one device. There is an opportunity to build innovative applications and services by exploiting these devices' untapped rechargeable energy, sensing, and processing capabilities. In this thesis, we propose, develop, implement, and evaluate LoadIoT, a peer-to-peer load-balancing scheme that can distribute tasks among a plethora of mobile smart devices in the IoT world. We develop and demonstrate an Android-based proof-of-concept load-balancing application. We also present a model of the system, which is used to validate the efficiency of the load-balancing approach under varying application scenarios. Load-balancing concepts can be applied to IoT scenarios linked to smart devices, reducing the traffic sent to the cloud and the energy consumption of the devices. The data acquired from the experimental outcomes enable us to determine the feasibility and cost-effectiveness of load-balanced P2P smartphone-based applications.
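To make the idea concrete, here is a minimal sketch of the kind of peer-to-peer task dispatch the thesis describes, assuming a simple cost heuristic over queue length and battery level; the names, the heuristic, and the task model are all illustrative assumptions, not the actual LoadIoT design:

```python
from dataclasses import dataclass, field

@dataclass
class Peer:
    """A smart device participating in the P2P scheme (illustrative model)."""
    name: str
    battery: float                           # remaining energy, 0.0-1.0
    queue: list = field(default_factory=list)

    def cost(self) -> float:
        # Prefer peers with short task queues and healthy batteries.
        return len(self.queue) + 5.0 * (1.0 - self.battery)

def dispatch(task: str, peers: list) -> Peer:
    """Send the task to the currently cheapest peer instead of the cloud."""
    target = min(peers, key=lambda p: p.cost())
    target.queue.append(task)
    return target

peers = [Peer("phone-a", 0.9), Peer("tablet-b", 0.4), Peer("phone-c", 0.7)]
for task in ["resize-image", "aggregate-sensor-data", "hash-chunk"]:
    print(task, "->", dispatch(task, peers).name)
```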
Abstract:
THE WAY TO ORGANIZATIONAL LONGEVITY – Balancing stability and change in Shinise firms

The overall purpose of this dissertation is to investigate the secret of longevity in Shinise firms. On the basic assumption that organizational longevity is about balancing stability and change, the theoretical perspectives incorporate routine practice, organizational culture, and organizational identity. These theories explain stability and change in an organization separately and in combination. Qualitative inductive theory building was used in the study. Overall, the empirical data comprised 75 in-depth and semi-structured interviews, 137 archival materials, and observations made over 17 weeks. According to the empirical findings, longevity in Shinise firms is attributable to internal mechanisms (Shinise tenacity, stability in motion, and emergent change) that secure a balance between stability and change, to the continuing stability of the socio-cultural environment in the local community, and to active interaction between organizational and local cultures. The study contributes to the literature on organizational longevity and to alternative theoretical approaches, first, by theorizing the mechanisms of Shinise tenacity and cross-level cultural dynamism and, second, by pointing out the critical role that the way firms set their ultimate goal, the dynamism in culture, and the effect of the firm's history on its current business play in securing longevity.

KEYWORDS: Change; Culture; Organizational identity; Organizational longevity; Routines; Shinise firms; Stability; Qualitative research
Abstract:
Mathematical predictions of flow conditions along a steep-gradient, rock-bedded stream are examined. Stream gage discharge data and Manning's Equation are used to calculate alternative velocities, and subsequently Froude Numbers, assuming varying values of the velocity coefficient and either full depth or depth adjusted for vertical flow separation. Comparison of the results with photos shows that Froude Numbers calculated from velocities derived from Manning's Equation, assuming a velocity coefficient of 1.30 and full depth, most accurately predict flow conditions when supercritical flow is defined as Froude Number values above 0.84. Calculated Froude Number values between 0.8 and 1.1 correlate well with observed transitional flow, defined as the first appearance of small diagonal waves. Transitions from subcritical through transitional to clearly supercritical flow are predictable. Froude Number contour maps reveal a sinuous rise and fall of values reminiscent of pool-riffle energy distribution.
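As a worked illustration of the calculation the abstract describes, the sketch below applies Manning's Equation and the Froude Number definition for a wide channel, using the velocity coefficient (1.30) and regime thresholds (0.84; 0.8-1.1) reported above; the channel geometry values are invented for illustration:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def manning_velocity(n, R, S, alpha=1.30):
    """Mean velocity from Manning's Equation (SI units), scaled by the
    velocity coefficient alpha used in the abstract.
    n: Manning roughness; R: hydraulic radius (m); S: slope (m/m)."""
    return alpha * (1.0 / n) * R ** (2.0 / 3.0) * math.sqrt(S)

def froude_number(v, depth):
    """Froude Number for a wide channel: Fr = v / sqrt(g * d)."""
    return v / math.sqrt(G * depth)

def classify(fr):
    """Regime bands reported in the abstract (supercritical is defined
    there as Fr > 0.84; the 0.8-1.1 band reads as transitional)."""
    if fr < 0.8:
        return "subcritical"
    if fr <= 1.1:
        return "transitional"
    return "clearly supercritical"

v = manning_velocity(n=0.05, R=0.6, S=0.02)   # illustrative geometry
fr = froude_number(v, depth=0.6)
print(f"v = {v:.2f} m/s, Fr = {fr:.2f}, regime: {classify(fr)}")
```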
Abstract:
This paper explores the concept of Value Stream Analysis and Mapping (VSA/M) as applied to Product Development (PD) efforts. Value Stream Analysis and Mapping is a method of business process improvement. The application of VSA/M began in the manufacturing community. PD efforts provide a different setting for the use of VSA/M. Site visits were made to nine major U.S. aerospace organizations. Interviews, discussions, and participatory events were used to gather data on (1) the sophistication of the tools used in PD process improvement efforts, (2) the lean context of the use of the tools, and (3) the success of the efforts. It was found that all three factors were strongly correlated, suggesting that success depends on both good tools and a lean context. Finally, a general VSA/M method for PD activities is proposed. The method uses modified process mapping tools to analyze and improve processes.
Abstract:
We present a new, process-based model of soil and stream water dissolved organic carbon (DOC): the Integrated Catchments Model for Carbon (INCA-C). INCA-C is the first model of DOC cycling to explicitly include the effects of different land cover types, hydrological flow paths, in-soil carbon biogeochemistry, and surface water processes on in-stream DOC concentrations. It can be calibrated using only routinely available monitoring data. INCA-C simulates daily DOC concentrations over periods of years to decades. Sources, sinks, and transformations of solid and dissolved organic carbon in peat and forest soils, wetlands, and streams, as well as organic carbon mineralization in stream waters, are modeled. INCA-C is designed to be applied to natural and seminatural forested and peat-dominated catchments in boreal and temperate regions. Simulations at two forested catchments showed that seasonal and interannual patterns of DOC concentration could be modeled using climate-related parameters alone. A sensitivity analysis showed that model predictions were dependent on the mass of organic carbon in the soil and that in-soil process rates were dependent on soil moisture status. Sensitive rate coefficients in the model included those for organic carbon sorption and desorption and DOC mineralization in the soil. The model was also sensitive to the amount of litter fall. Our results show the importance of climate variability in controlling surface water DOC concentrations and suggest the need for further research on the mechanisms controlling the production and consumption of DOC in soils.
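Purely as a toy illustration of the process types named above (sorption/desorption, mineralization, litter input, moisture-dependent rates), the following single-box daily mass balance shows the general shape of such a simulation; the rate constants and state values are invented, and this is in no way the INCA-C formulation:

```python
def step_doc(doc, soc, litter, moisture,
             k_desorb=0.001, k_sorb=0.05, k_min=0.02):
    """Advance dissolved (doc) and solid (soc) organic carbon one day.
    Soil moisture in [0, 1] scales the in-soil process rates, echoing the
    sensitivity result reported in the abstract."""
    desorption = k_desorb * soc * moisture      # solid -> dissolved
    sorption = k_sorb * doc * moisture          # dissolved -> solid
    mineralization = k_min * doc * moisture     # dissolved -> CO2
    return (doc + desorption - sorption - mineralization,
            soc + litter + sorption - desorption)

doc, soc = 10.0, 5000.0            # g C / m^2, invented initial stores
for day in range(365):
    moisture = 0.5 + 0.4 * (day % 183) / 183   # crude seasonal wet/dry cycle
    doc, soc = step_doc(doc, soc, litter=1.2, moisture=moisture)
print(f"DOC after one year: {doc:.2f} g C/m^2")
```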
Abstract:
A regional overview of the water quality and ecology of the River Lee catchment is presented. Specifically, data describing the chemical, microbiological and macrobiological water quality and fisheries communities have been analysed, based on a division into river, sewage treatment works, fish-farm, lake and industrial samples. Nutrient enrichment and the highest concentrations of metals and micro-organics were found in the urbanised, lower reaches of the Lee and in the Lee Navigation. Average annual concentrations of metals were generally within environmental quality standards although, on many occasions, concentrations of cadmium, copper, lead, mercury and zinc were in excess of the standards. Various organic substances (used as herbicides, fungicides, insecticides, chlorination by-products and industrial solvents) were widely detected in the Lee system. Concentrations of ten micro-organic substances were observed in excess of their environmental quality standards, though not in terms of annual averages. Sewage treatment works were the principal point-source input of nutrients, metals and micro-organic determinands to the catchment. Diffuse nitrogen sources contributed approximately 60% and 27% of the in-stream load in the upper and lower Lee respectively, whereas approximately 60% and 20% of the in-stream phosphorus load was derived from diffuse sources in the upper and lower Lee. For metals, the most significant source was urban runoff from North London. In reaches less affected by effluent discharges, diffuse runoff from urban and agricultural areas dominated trends. High microbiological content, observed in the River Lee particularly in urbanised reaches, was far in excess of the EC Bathing Water Directive standards. Water quality issues and degraded habitat in the lower reaches of the Lee have led to impoverished aquatic fauna but, within the mid-catchment reaches and upper agricultural tributaries, less nutrient enrichment and channel alteration has permitted more diverse aquatic fauna.
Abstract:
The beds of active ice streams in Greenland and Antarctica are largely inaccessible, hindering a full understanding of the processes that initiate, sustain and inhibit fast ice flow in ice sheets. Detailed mapping of the glacial geomorphology of palaeo-ice stream tracks is, therefore, a valuable tool for exploring the basal processes that control their behaviour. In this paper we present a map that shows detailed glacial geomorphology from a part of the Dubawnt Lake Palaeo-Ice Stream bed on the north-western Canadian Shield (Northwest Territories), which operated at the end of the last glacial cycle. The map (centred on 63°55′42″N, 102°29′11″W, approximate scale 1:90,000) was compiled from digital Landsat Enhanced Thematic Mapper Plus satellite imagery and digital and hard-copy stereo-aerial photographs. The ice stream bed is dominated by parallel mega-scale glacial lineations (MSGL) whose lengths exceed several kilometres, but the map also reveals that they have, in places, been superimposed by transverse ridges known as ribbed moraines. The ribbed moraines lie on top of the MSGL and appear to have segmented the individual lineaments, indicating that the formation of the ribbed moraines post-dates that of the MSGL. The presence of ribbed moraine in the onset zone of another palaeo-ice stream has been linked to oscillations between cold- and warm-based ice and/or a patchwork of cold-based areas, which led to acceleration and deceleration of ice velocity. Our hypothesis is that the ribbed moraines on the Dubawnt Lake Ice Stream bed are a manifestation of the process that led to ice stream shut-down and may be associated with the process of basal freeze-on. The precise formation of ribbed moraines, however, remains open to debate, and field observation of their structure will provide valuable data for formal testing of models of their formation.
Abstract:
Ascertaining the location of palaeo-ice streams is crucial in order to produce accurate reconstructions of palaeo-ice sheets and to examine interactions with the ocean-climate system. This paper reports evidence for a major ice stream in Amundsen Gulf, Canadian Arctic Archipelago. Mapping from satellite imagery (Landsat ETM+) and digital elevation models, including bathymetric data, is used to reconstruct flow patterns on southwestern Victoria Island and the adjacent mainland (Nunavut and Northwest Territories). Several flow-sets indicative of ice streaming are found feeding into the marine trough, and cross-cutting relationships between these flow-sets, together with previously published radiocarbon dates, reveal several phases of ice stream activity centred in Amundsen Gulf and Dolphin and Union Strait. A large erosional footprint on the continental shelf indicates that the ice stream (ca. 1000 km long and ca. 150 km wide) filled Amundsen Gulf, probably at the Last Glacial Maximum. Subsequently, the ice stream reorganised as the margin retreated back along the marine trough, eventually splitting into two separate low-gradient lobes in Prince Albert Sound and Dolphin and Union Strait. The location of this major ice stream holds important implications for ice sheet-ocean interactions and, specifically, for the development of Arctic Ocean ice shelves and the delivery of icebergs into the western Arctic Ocean during the late Pleistocene. Copyright (C) 2006 John Wiley & Sons, Ltd.
Abstract:
In molecular biology, it is often desirable to find common properties in large numbers of drug candidates. One family of methods stems from the data mining community, where algorithms to find frequent graphs have received increasing attention over the past years. However, the computational complexity of the underlying problem and the large amount of data to be explored essentially render sequential algorithms useless. In this paper, we present a distributed approach to the frequent subgraph mining problem to discover interesting patterns in molecular compounds. This problem is characterized by a highly irregular search tree, for which no reliable workload prediction is available. We describe the three main aspects of the proposed distributed algorithm, namely, a dynamic partitioning of the search space, a distribution process based on a peer-to-peer communication framework, and a novel receiver-initiated load-balancing algorithm. The effectiveness of the distributed method has been evaluated on the well-known National Cancer Institute's HIV-screening data set, where we were able to show close-to-linear speedup in a network of workstations. The proposed approach also allows for dynamic resource aggregation in a non-dedicated computational environment. These features make it suitable for large-scale, multi-domain, heterogeneous environments, such as computational grids.
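A minimal sketch of the receiver-initiated (work-stealing) idea described above, where an idle worker asks a peer for half of its pending search nodes; the task model and names are invented for illustration, and the real algorithm's peer-to-peer messaging is elided:

```python
import random
from collections import deque

class Worker:
    def __init__(self, wid):
        self.wid = wid
        self.tasks = deque()

    def steal_from(self, peers):
        """Idle receiver asks a random peer for half of its pending work."""
        donor = random.choice([p for p in peers if p is not self])
        if len(donor.tasks) > 1:
            # Take half of the donor's queue (dynamic search-space partition).
            for _ in range(len(donor.tasks) // 2):
                self.tasks.append(donor.tasks.popleft())
            return True
        return False

def run(workers, expand):
    """Process an irregular search tree until every queue is empty."""
    processed = 0
    while any(w.tasks for w in workers):
        for w in workers:
            if not w.tasks:
                w.steal_from(workers)      # receiver-initiated balancing
                continue
            node = w.tasks.pop()
            processed += 1
            w.tasks.extend(expand(node))   # irregular branching factor
    return processed

def expand(depth):
    """Toy node expansion: 0-3 children per node, bounded depth."""
    return [depth + 1] * random.randint(0, 3) if depth < 12 else []

random.seed(1)
workers = [Worker(i) for i in range(4)]
workers[0].tasks.append(0)                 # all work initially at one peer
print("nodes processed:", run(workers, expand))
```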
Abstract:
We present a general Multi-Agent System framework for distributed data mining based on a peer-to-peer model. Agent protocols are implemented through message-based asynchronous communication. The framework adopts a dynamic load-balancing policy that is particularly suitable for irregular search algorithms. A modular design allows a separation of the general-purpose system protocols and software components from the specific data mining algorithm. The experimental evaluation has been carried out on a parallel frequent subgraph mining algorithm, which has shown good scalability performance.
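The modular separation the abstract describes can be pictured as below: a generic message-driven agent loop that knows nothing about mining, with the specific algorithm plugged in as a callable; the names and message format are illustrative assumptions, not the framework's actual API:

```python
import queue
import threading

def agent(inbox, outbox, algorithm):
    """Generic system-level agent loop: it only understands messages and
    delegates all mining-specific work to the plugged-in callable."""
    while True:
        msg = inbox.get()
        if msg is None:                  # shutdown message of the protocol
            break
        outbox.put(("result", algorithm(msg)))

def count_edges(graph):
    """Stand-in for a frequent-subgraph mining step; any callable works."""
    return sum(len(adj) for adj in graph.values()) // 2

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=agent, args=(inbox, outbox, count_edges))
t.start()
inbox.put({"a": ["b", "c"], "b": ["a"], "c": ["a"]})
inbox.put(None)
t.join()
print(outbox.get())                      # ('result', 2)
```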
Abstract:
Among the most influential and popular data mining methods is the k-Means algorithm for cluster analysis. Techniques for improving the efficiency of k-Means have been explored largely in two main directions. The amount of computation can be significantly reduced by adopting geometrical constraints and an efficient data structure, notably a multidimensional binary search tree (KD-Tree). These techniques reduce the number of distance computations the algorithm performs at each iteration. A second direction is parallel processing, where data and computation loads are distributed over many processing nodes. However, little work has been done to provide a parallel formulation of the efficient sequential techniques based on KD-Trees. Such approaches are expected to have an irregular distribution of computation load and can suffer from load imbalance. This issue has so far limited the adoption of these efficient k-Means variants in parallel computing environments. In this work, we provide a parallel formulation of the KD-Tree based k-Means algorithm for distributed memory systems and address its load-balancing issue. Three solutions have been developed and tested: two are based on a static partitioning of the data set, and a third incorporates a dynamic load-balancing policy.
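As a rough sketch of a dynamic policy like the third solution mentioned above, the assignment step of k-Means is split into small chunks that workers pull from a shared queue, so faster workers absorb an irregular load of the kind KD-Tree pruning induces; the chunk size, data, and names are illustrative:

```python
import queue
import random
import threading

def assign_chunk(points, centers):
    """Nearest-center assignment for one chunk of points."""
    labels = []
    for p in points:
        labels.append(min(range(len(centers)),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c]))))
    return labels

def worker(chunks, centers, results):
    """Pull chunks until the shared queue is drained (dynamic policy)."""
    while True:
        try:
            idx, pts = chunks.get_nowait()
        except queue.Empty:
            return
        results[idx] = assign_chunk(pts, centers)

random.seed(0)
points = [(random.random(), random.random()) for _ in range(10_000)]
centers = [(0.2, 0.2), (0.8, 0.8), (0.5, 0.1)]
parts = [points[i:i + 500] for i in range(0, len(points), 500)]
chunks = queue.Queue()
for i, part in enumerate(parts):
    chunks.put((i, part))
results = [None] * len(parts)
threads = [threading.Thread(target=worker, args=(chunks, centers, results))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
labels = [lab for chunk in results for lab in chunk]
print("cluster sizes:", [labels.count(c) for c in range(len(centers))])
```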
Abstract:
This paper presents a simple Bayesian approach to sample size determination in clinical trials. It is required that the trial should be large enough to ensure that the data collected will provide convincing evidence either that an experimental treatment is better than a control or that it fails to improve upon control by some clinically relevant difference. The method resembles standard frequentist formulations of the problem, and indeed in certain circumstances involving 'non-informative' prior information it leads to identical answers. In particular, unlike many Bayesian approaches to sample size determination, use is made of an alternative hypothesis that an experimental treatment is better than a control treatment by some specified magnitude. The approach is introduced in the context of testing whether a single stream of binary observations is consistent with a given success rate p0. Next the case of comparing two independent streams of normally distributed responses is considered, first under the assumption that their common variance is known and then for unknown variance. Finally, the more general situation in which a large sample is to be collected and analysed according to the asymptotic properties of the score statistic is explored. Copyright (C) 2007 John Wiley & Sons, Ltd.
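For the known-variance, two-stream case described above, the sketch below computes the per-group sample size from the standard frequentist formula, which the abstract notes the Bayesian approach reproduces under 'non-informative' priors; the numerical inputs are illustrative:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.9):
    """Per-group sample size for comparing the means of two independent
    normal streams with known common variance sigma^2, where delta is the
    clinically relevant difference: n = 2 * (sigma * (z_a + z_b) / delta)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_b = NormalDist().inv_cdf(power)           # desired power
    return math.ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

print(n_per_group(delta=5.0, sigma=10.0))       # 85 per group
```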
Abstract:
In the past decade, airborne LIght Detection And Ranging (LIDAR) has been recognised by both the commercial and public sectors as a reliable and accurate source for land surveying in environmental, engineering and civil applications. Commonly, the first task in investigating LIDAR point clouds is to separate ground and object points. Skewness Balancing has been proven to be an efficient non-parametric, unsupervised classification algorithm to address this challenge. Initially developed for moderate terrain, the algorithm needs to be adapted to handle sloped terrain. This paper addresses the difficulty of separating object and ground points in LIDAR data over hilly terrain. A case study has been carried out on a LIDAR data set that is diverse in terms of data provider, resolution and LIDAR echo. Several sites in urban and rural areas with man-made structures and vegetation in moderate and hilly terrain have been investigated, and three categories have been identified. An urban scene with a river bank was selected for deeper investigation to extend the existing algorithm. The results show that an iterative use of Skewness Balancing is suitable for sloped terrain.
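A minimal sketch of the core Skewness Balancing step as the abstract presents it, peeling the highest returns off the point cloud until the skewness of the remaining heights is balanced; the data are invented, and the iterative extension for sloped terrain proposed in the paper is not shown:

```python
from statistics import mean, pstdev

def skewness(zs):
    """Sample skewness of a list of heights."""
    m, s = mean(zs), pstdev(zs)
    if s == 0:
        return 0.0
    return sum(((z - m) / s) ** 3 for z in zs) / len(zs)

def skewness_balance(heights):
    """Peel the highest points off until the remainder is no longer
    positively skewed; returns (ground, objects)."""
    ground = sorted(heights)
    objects = []
    while len(ground) > 2 and skewness(ground) > 0:
        objects.append(ground.pop())    # highest remaining point -> object
    return ground, objects

# Illustrative scene: near-flat terrain plus a few tall building returns.
heights = [10.1, 10.3, 9.8, 10.0, 10.2, 9.9, 10.4, 18.5, 19.2, 17.8]
ground, objects = skewness_balance(heights)
print(len(ground), "ground points,", len(objects), "object points")
```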