850 results for High-dimensional data visualization
Abstract:
To tie 2D cross-hole seismic data to the surrounding 3D surface seismic data, the low-frequency data must be reconstructed into high-frequency data, and blind deconvolution is a key technology for doing so. In this paper, an implementation of blind deconvolution is introduced, and an optimized preconditioned conjugate gradient method is used to improve the stability of the algorithm and reduce the computational cost. The high-frequency reconstructed seismic data and the cross-hole seismic data are then combined for constrained inversion. Processing of real data proves the method effective. To address the problem that seismic data resolution cannot meet the requirements of reservoir prediction for thin river-facies layers in eastern Chinese oil fields, a high-frequency data reconstruction method is proposed. The extrema of the seismic data are used to derive a modulation function, which is applied to the original seismic data to obtain the high-frequency part of the reconstructed data and thus rebuild wide-band data. This method greatly reduces computation and makes parameter adjustment easy. In the output profile, the original character of the seismic events is preserved; the common artifact of breaking events and introducing zeros that produce aliasing is avoided; and interbedded details are enhanced compared with the original profiles. The effective band of the seismic data is expanded, and the method is validated by processing of field data. To address the problem in the exploration and development of eastern Chinese oil fields that high-frequency log data and relatively low-frequency seismic data cannot be merged, a workflow of log-data extrapolation constrained by a time-phase model based on local wave decomposition is proposed.
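The abstract does not detail its optimized preconditioned conjugate gradient variant; as a rough illustration of the kind of solver involved, here is a minimal preconditioned CG for a symmetric positive-definite system with a Jacobi (diagonal) preconditioner. The function name and the small test system are illustrative, not taken from the paper.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for SPD A; M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    z = M_inv(r)                       # preconditioned residual
    p = z.copy()                       # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # conjugate direction update
        rz = rz_new
    return x

# small SPD test system with Jacobi (diagonal) preconditioning
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))
print(np.allclose(A @ x, b))           # True
```

A good preconditioner clusters the eigenvalues of the preconditioned operator, which is what reduces the iteration count and improves stability in the way the abstract describes.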
The seismic instantaneous phase is resolved by local wave decomposition to build the time-phase model; the layers near the well are matched to establish the relation between log and seismic data; multiple log attributes are extrapolated under the constraint of the seismic equiphase map; and high-precision attribute inversion sections are produced. In the course of resolving the instantaneous phase, a new local wave decomposition method, Hilbert transform mean mode decomposition (HMMD), is proposed to improve computation speed and noise immunity. The method is applied to high-resolution reservoir prediction in the Mao2 survey of the Daqing oil field, producing multiple attribute profiles of wave impedance, gamma ray, electrical resistivity and sand membership degree, with high resolution and good horizontal continuity. It is proved to be an effective method for reservoir prediction and estimation.
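The HMMD algorithm itself is not reproduced here, but the instantaneous phase it builds on is the standard analytic-signal attribute. A minimal sketch using the Hilbert transform on a synthetic trace (the trace and sampling are invented for illustration):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(trace):
    """Instantaneous phase of a trace from its analytic signal."""
    analytic = hilbert(trace)              # trace + i * H{trace}
    return np.unwrap(np.angle(analytic))

dt = 0.001                                 # 1 ms sampling
t = np.arange(0, 0.5, dt)
trace = np.sin(2 * np.pi * 30 * t)         # synthetic 30 Hz trace

phase = instantaneous_phase(trace)
# instantaneous frequency (Hz) is the time derivative of the phase
inst_freq = np.diff(phase) / (2 * np.pi * dt)
print(round(float(np.mean(inst_freq)), 1))  # 30.0
```

As the abstract notes for the Hilbert transform generally, this attribute is sensitive to high-frequency noise on real traces, which is the motivation for decomposition-based variants such as HMMD.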
Abstract:
The continent of eastern China, especially the North China Craton (NCC), has undergone intensive tectonic reactivation during the Mesozoic and Cenozoic, with widespread magmatism, high heat flow and the development of large sedimentary basins and mountain ranges. The cratonic lithosphere of the region has been remarkably destroyed, characterized not only by a significant reduction in thickness but also by complex modifications of the physical and chemical properties of the lithosphere. As for the tectonic regime controlling the evolution of the NCC, various models have been put forward, including the impingement of mantle plumes (the "mushroom cloud" model), the collision of the South China and North China blocks, the subduction of the Pacific plate, etc. Lithospheric delamination and thermal erosion have been proposed as the two end-member mechanisms of the lithospheric thinning. However, given the paucity of data, deep structural evidence is still scarce for distinguishing and testing these models. To better understand the deep structure of the NCC, from 2000 to the present, temporary seismic array observations have been conducted in the NCC by the Seismological Laboratory of the Institute of Geology and Geophysics, Chinese Academy of Sciences, under the North China Interior Structure Project (NCISP). Many of the arrays extend from the North China Craton into the off-craton regions and traverse the main tectonic boundaries. In total, more than 300 broadband seismic stations have been deployed along several profiles traversing the major tectonic units in the craton's interior, at its boundary areas and in the neighboring off-craton regions. These stations recorded abundant high-quality data, which provide an unprecedented opportunity to unravel the deep structural features of the NCC using seismological methods.
Among seismological methods, surface wave analysis is an efficient and widely adopted technique for studying crustal and upper mantle structure. In particular, it can provide the absolute S-wave velocities that are difficult to obtain with other methods. Benefiting from the deployment of dense seismic arrays, progress has been made in improving the spatial resolution of surface wave imaging, which makes it possible to resolve fine-scale velocity structures of the crust and upper mantle from surface wave analysis. Meanwhile, the differences between the S-wave velocities derived from Rayleigh and Love wave data provide information on the radial anisotropy beneath the seismic arrays. In this thesis, using the NCISP-III broadband data and based on phase velocity dispersion analysis and inversion of fundamental-mode Rayleigh and Love waves, I investigated the lateral variations in the S-wave velocity structure of the crust and uppermost mantle beneath the Yanshan Belt and adjacent regions at the northeastern boundary of the NCC. Based on the constructed structural images, I discuss possible deep processes of craton destruction in the study region.
Abstract:
The conventional microtremor survey is based on single-point exploration, which involves collecting field data, estimating the phase velocity, inverting the dispersion curve and obtaining the S-wave velocity structure. In large-scale exploration, and when constructing a two-dimensional velocity section, the inversion is time-consuming and laborious, and its precision depends on subjective interpretation, so the results differ from analyst to analyst. In fact, we often do not need the S-wave velocity values themselves but only the relative variation of velocity. For these reasons, this paper proposes calculating the apparent S-wave velocity (Vx) to replace the S-wave velocity inversion and to obtain the relative variation of the S-wave velocity. Using this method, we can reduce analyst bias, shorten the data processing time and improve work efficiency. The apparent S-wave velocity is a surface-wave property that can clearly reflect downcast columns, mined-out areas and other anomalous geological bodies. In this paper, Matlab is used to build a three-dimensional data volume of the apparent S-wave velocity, from which any apparent S-wave velocity section we need can be extracted. An application case shows the designed method to be reliable and effective. Downcast columns, mined-out areas and other anomalous geological bodies are clearly imaged in the apparent S-wave velocity sections, and from the contours of the apparent S-wave velocity, the interface shape of the major target layers can be broadly delineated.
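The volume-building and slicing step lends itself to a short sketch (a numpy analogue of the Matlab workflow described above; the grid dimensions and the apparent-velocity values are invented for illustration, not field data):

```python
import numpy as np

# hypothetical survey grid: nx x ny measurement points, nz depth samples each
nx, ny, nz = 20, 15, 50
rng = np.random.default_rng(0)

# stack the per-point apparent S-wave velocity (Vx) profiles into a volume;
# a velocity trend increasing with depth plus noise stands in for real data
depth_trend = np.linspace(200.0, 800.0, nz)                 # m/s
volume = depth_trend + 20.0 * rng.standard_normal((nx, ny, nz))

# any section can then be pulled directly from the volume:
inline_section = volume[10, :, :]   # vertical section along one line
depth_slice = volume[:, :, 25]      # horizontal slice at one depth sample
print(inline_section.shape, depth_slice.shape)  # (15, 50) (20, 15)
```

Because the sections are simple array slices, no per-section inversion is needed, which is exactly the time saving the abstract claims for the apparent-velocity approach.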
Abstract:
The study of the rheology of the lithosphere and the environments of the seismogenic layer is currently a fundamental topic of international earthquake research, and Yunnan is an ideal place for such a study. Through a multi-disciplinary comprehensive study of petrology, geophysics, seismo-geology, rock mechanics, etc., depth-strength profiles of the lithosphere have been constructed for the first time, and the seismogenic layer and its geophysical and tectonic environments in Yunnan have been systematically expounded in this paper. The results are of significance for further understanding the mechanism of strong earthquake generation, delineating potential foci and revealing recent geodynamic processes in Yunnan. Through a comprehensive comparison of the early and middle Proterozoic metamorphic rocks outcropping at the surface, DSS data and experimental data on rock seismic velocities under high temperature and high pressure, the petrological structure of the crust and upper mantle of Yunnan has been studied: the upper, middle and lower crust are composed of metamorphic rocks of greenschist, amphibolite and granulite facies, respectively (or granitoids, diorites and gabbros, respectively), and the upper mantle is composed of peridotites. Through comparative studies of heat flow and the epicenters of strong earthquakes, the distribution of geotemperature and focal-depth data, the relationship between seismicity and the geothermal structure of the lithosphere in Yunnan has been studied: strong earthquakes with magnitude M ≥ 6.0 mainly occur at geothermal gradient zones, and the seismic foci are densely distributed between the 200~500 ℃ isotherms.
On the basis of studies of the rock properties and constituents of the crust and upper mantle and the geothermal structure of the lithosphere, the rheological stratification of the lithosphere has been studied and the corresponding depth-strength profiles have been constructed for Yunnan. The lithosphere in most regions of Yunnan is rheologically stratified: a brittle regime in the upper crust or the upper part of the upper crust, a ductile regime in the middle crust or from the lower part of the upper crust to the middle crust, a ductile regime in the lower crust and a ductile regime in the subcrustal lithosphere. The rheological stratification shows quite marked lateral variations among the various tectonic units. The distribution of the seismogenic layer has been determined using high-accuracy focal-depth data. Through comparison of the petrological structure, seismic velocity structure, electrical structure, geotemperature structure and rheological structure, and through study of focal mechanisms within the seismogenic layer, the geophysical environments of the seismogenic layer in Yunnan have been studied. The seismogenic layer in Yunnan is located at depths of 3~20 km; its rocks are metamorphic rocks of greenschist to amphibolite facies (or granites to diorites); the seismogenic layer and the focal regions of strong earthquakes within it have a medium structure with relatively high seismic velocity, high density and high resistivity; an intracrustal low-seismic-velocity and high-conductivity layer exists below the seismogenic layer; and the geotemperature is generally 100~500 ℃ over the depth range of the seismogenic layer.
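Depth-strength profiles of the kind described above are typically built by taking, at each depth, the smaller of a Byerlee-type frictional (brittle) strength and a power-law creep (ductile) strength. A minimal sketch with illustrative constants (roughly quartzite-like creep parameters and a generic geotherm, not the values derived for Yunnan):

```python
import numpy as np

z = np.linspace(1e3, 40e3, 200)              # depth, m
rho, g = 2800.0, 9.8                         # crustal density, gravity
T = 288.0 + 25e-3 * z                        # linear geotherm, 25 C/km, in K

# brittle regime: friction-controlled, linear in effective overburden
# (0.6 friction coefficient, lambda = 0.4 pore-pressure factor; illustrative)
brittle = 0.6 * rho * g * z * (1 - 0.4)      # Pa

# ductile regime: power-law creep, sigma = (rate/A)**(1/n) * exp(Q/(n*R*T))
A, n, Q = 5e-24, 3.0, 190e3                  # illustrative quartzite-like values
strain_rate, R = 1e-14, 8.314
ductile = (strain_rate / A) ** (1 / n) * np.exp(Q / (n * R * T))

# the depth-strength profile is the weaker of the two mechanisms
strength = np.minimum(brittle, ductile)

# the brittle-ductile transition marks the base of the seismogenic layer
bdt_km = z[np.argmax(ductile < brittle)] / 1e3
print(round(float(bdt_km), 1))
```

With these illustrative numbers the transition falls in the mid-teens of kilometres, of the same order as the 3~20 km seismogenic layer reported for Yunnan; the real profiles depend on the locally derived geotherm and rock parameters.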
A horizontal stress field predominates in the seismogenic layer, which corresponds to the brittle regime of the upper crust, or to the brittle regime of the upper crust plus the semibrittle regime of the middle crust. The formation of the seismogenic layer and the preparation and occurrence of strong earthquakes result from the combined actions of the source faults, rock constituents, medium-property structure, geotemperature distribution, rheological structure of the seismogenic layer and its external environments. Through study of the structure, activity, slip rate and segmentation of the active faults and of the seismogenic faults, the tectonic environments of the seismogenic layer in Yunnan have been studied. The source faults of the seismogenic layer in Yunnan are mainly A-type, high-dip-angle strike-slip faults: right-lateral strike-slip faults trending NW-NNW and left-lateral strike-slip faults trending NE-NEE in southwestern Yunnan; right-lateral strike-slip faults trending NNW and left-lateral strike-slip faults trending NNE (partially normal) in northwestern Yunnan; right-lateral strike-slip faults trending NWW in central Yunnan; and left-lateral strike-slip faults trending NW-NNW in eastern Yunnan. Taking the Lijiang earthquake (Ms = 7.0) as an example, the generating environments of the strong earthquake and the seismogenic mechanical mechanism have been studied: the source region has a medium structure with relatively high seismic velocity and high resistivity; an intracrustal low-velocity and high-conductivity layer exists below it; and the strong earthquakes occur near the transition zone from brittle to ductile deformation in the crust. These characteristics are common to the generating environments of strong earthquakes.
However, the specific seismogenic tectonic environments and the action of the source stress field in the various regions correspondingly constrain the dislocation and rupture mechanics of the source faults of strong earthquakes.
Abstract:
The modeling formula based on the seismic wavelet can well simulate zero-phase and mixed-phase wavelets, and can approximate maximum-phase and minimum-phase wavelets in a certain sense. The modeled wavelet can be used as a wavelet function after a suitable modification term is added to meet the required conditions. On the basis of the modified Morlet wavelet, a derivative wavelet function has been derived. As a basic wavelet, it can be used for high-resolution frequency-division processing and instantaneous feature extraction, in accordance with the expansion characteristics of the signal in the time and scale domains of each wavelet constructed. An application example proves the effectiveness and reasonableness of the method. Based on analysis of the SVD (Singular Value Decomposition) filter, and by taking this wavelet as the basic wavelet and combining SVD filtering with the wavelet transform, a new de-noising method based on multi-dimensional and multi-space de-noising is proposed. The implementation of this method is discussed in detail. Theoretical analysis and modeling show that the method has a strong de-noising capacity while preserving the attributes of the effective wave; it is a good tool for de-noising when the S/N ratio is poor. It is difficult for a deconvolution filter to give prominence to the high-frequency information of the reflection events of an important layer while also taking account of the other frequency components of the seismic data, and a Fourier-transform filter has its own problems in realizing this goal. In this paper, a new method is put forward: frequency-division processing of seismic data by wavelet transform and reconstruction. In ordinary seismic processing methods for resolution improvement, the deconvolution operator has poor local characteristics, which degrades the operator's frequency response, whereas the wavelet function in the wavelet transform has very good local characteristics.
Frequency-division data processing in the wavelet transform also yields quite good high-resolution data, but it needs more time than the deconvolution method does. On the basis of the frequency-division processing method in the wavelet domain, a new technique is put forward, which involves 1) designing filter operators equivalent to the deconvolution operator in the time and frequency domains of the wavelet transform, 2) obtaining a derivative wavelet function suitable for high-resolution seismic data processing, and 3) processing high-resolution seismic data by the deconvolution method in the time domain. In methods that produce instantaneous characteristic signals using the Hilbert transform, the Hilbert transform is very sensitive to high-frequency random noise; as a result, even weak high-frequency noise in seismic signals can submerge the obtained instantaneous characteristics. A method for obtaining the instantaneous characteristics of seismic signals in the wavelet domain is put forward, which derives them directly from the real part (the real signal, namely the seismic signal) and the imaginary part (the Hilbert transform of the real signal) of the wavelet transform. The method performs frequency division and noise removal at the same time. Moreover, weak waves whose frequency is lower than that of the high-frequency random noise are retained in the obtained instantaneous characteristics, and these weak waves can be seen in the instantaneous characteristic sections (such as instantaneous frequency, instantaneous phase and instantaneous amplitude). Impedance inversion is one of the tools of oil reservoir description, and Generalized Linear Inversion is one of its methods. This method has high inversion precision, but it is sensitive to noise in the seismic data, which can lead to erroneous results.
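The frequency-division idea can be sketched with a complex Morlet continuous wavelet transform, whose complex coefficients directly give band-limited instantaneous amplitude and phase. This is a generic sketch, not the paper's modified-Morlet operators; all parameter values are illustrative.

```python
import numpy as np

def morlet_cwt(signal, scales, dt, w0=6.0):
    """FFT-based continuous wavelet transform with a complex Morlet wavelet."""
    n = len(signal)
    omega = 2 * np.pi * np.fft.fftfreq(n, dt)       # angular frequency axis
    sig_hat = np.fft.fft(signal)
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # analytic Morlet: a Gaussian in the positive-frequency half only
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (s * omega - w0) ** 2)
        psi_hat[omega <= 0] = 0.0
        coeffs[i] = np.fft.ifft(sig_hat * psi_hat * np.sqrt(s))
    return coeffs

dt = 0.002                                  # 2 ms sampling
t = np.arange(0, 1, dt)
trace = np.sin(2 * np.pi * 25 * t)          # stand-in for a seismic trace

w0 = 6.0
bands = np.array([10.0, 25.0, 50.0])        # centre frequencies, Hz
scales = w0 / (2 * np.pi * bands)           # Morlet scale-frequency relation

coeffs = morlet_cwt(trace, scales, dt, w0)
inst_amp = np.abs(coeffs)                   # instantaneous amplitude per band
inst_phase = np.angle(coeffs)               # instantaneous phase per band

# the 25 Hz band carries almost all the energy of the 25 Hz trace
print(int(np.argmax(inst_amp.mean(axis=1))))  # 1
```

Because the Morlet wavelet is analytic (zero at negative frequencies), the coefficients play the role of a band-limited analytic signal, which is how instantaneous attributes and frequency division come out of one transform, as the abstract describes.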
In reservoir description of an important geological layer, in order to give prominence to its geological characteristics, not only the high-frequency impedance (for studying thin sand layers) but also the other frequency components of the impedance are needed, and this goal is difficult to achieve with some impedance inversion methods. The wavelet transform performs very well in de-noising and frequency-division processing. Therefore, in this paper, an impedance inversion method based on the wavelet transform is put forward: impedance inversion in frequency divisions via wavelet transform and reconstruction. Methods of time-frequency analysis based on the wavelet transform are also given. Finally, the methods above are applied to a real oil field, the Sansan oil field.
Abstract:
This thesis mainly discusses the wavelet transform and the frequency-division method. It describes frequency-division processing of prestack and post-stack seismic data and its application to inversion noise attenuation, frequency-division residual static correction and the use of high-resolution data in reservoir inversion. The thesis not only describes frequency division and inversion in theory but also verifies them by model calculations; all the methods are integrated together, and processing of actual data demonstrates their results. The thesis analyzes the differences and limitations of t-x and f-x prediction-filter noise attenuation from the standpoint of wavelet transform theory. It argues that noise and signal can be separated by wavelet frequency-division processing according to their differences in phase, amplitude and frequency. Comparison with f-x coherent-noise removal confirms the effectiveness and practicability of frequency division for isolating coherent and random noise. To avoid side effects in noise-free areas, an area-constraint method is adopted and the frequency-division processing is applied only in the noise area, which solves the problem of low-frequency loss in noise-free areas. Residual moveout differences in seismic data processing have a great effect on the stacked image and resolution, and different frequency components have different residual moveout differences. Frequency-division residual static correction realizes the frequency division and the calculation of the residual correction for each band; it thus handles the different residual corrections at different frequencies and protects the high-frequency information in the data. Processing of actual data gives good results in eliminating residual moveout differences of prestack data, improving stacked image quality and raising data resolution.
The thesis then analyzes the characteristics of random noise and its description in the time and frequency domains, gives inversion-based prediction solution methods, and realizes frequency-division inversion attenuation of random noise. Analysis of actual data processing results shows that noise removal by inversion has its own advantages. By analyzing the parameters related to resolution and the technology of high-resolution data processing, the thesis describes the relations between the frequency domain and resolution, the parameters governing resolution and the methods to increase it. It also gives processing flows for high-resolution data and examines the effect of high-resolution data on reservoir inversion, finally confirming the accuracy and precision of the reservoir inversion results. The research results of this thesis show that frequency-division noise attenuation, frequency-division residual correction and inversion noise attenuation are effective methods for increasing the SNR and resolution of seismic data.
Abstract:
Intense tectonic reactivation has occurred in the eastern continent of China since the Mesozoic, as evidenced by high heat flow, widespread magma extrusion and volcanic activity, and the development of large sedimentary basins. To explain the cause and mechanism of the tectonic processes of this period, researchers have put forward various models, such as a mantle plume, subduction of the Pacific slab, and Yangtze Block-North China Block collision; their seismological evidence, however, is still scarce. During the period from 2000 to 2003, large temporary seismic arrays were established in North China by the Institute of Geology and Geophysics, Chinese Academy of Sciences. In total, 129 portable seismic stations were linearly emplaced across the western and eastern boundaries of the Bohai Bay Basin and accumulated a large amount of high-quality data. Moreover, abundant data were also collected at the capital digital seismic network established during the ninth five-year period of national economic and social development. These provide an unprecedented opportunity to study the deep structure and associated geodynamic mechanisms of lithospheric processes in North China using seismological techniques. Seismology is an observation-based science, and the development of seismic observation greatly promotes the improvement of seismological theory and methodology. At the beginning of this thesis, I review the history of seismic observation and present some routine processing techniques used in array seismology. I also introduce two popular seismic imaging methods: the receiver function method and seismic tomography. The receiver function method has been widely used to study crustal and upper mantle structure, and many relevant results have been published. In this thesis I elaborate the theory of this method, including the basic concept of receiver functions and the methodology for data pre-processing, stacking and migration.
I also address some problems often encountered in practical applications of receiver function imaging. Using the teleseismic data collected at the temporary seismic arrays in North China, in particular the traveltime information of the P-to-S conversion and the multiple reverberations of the Moho discontinuity, I obtain the distributions of crustal thickness and Poisson's ratio in the northwestern boundary area of the Bohai Bay Basin and discuss their geological implications. Through detailed investigations of the crustal structure around the middle segment of the Tanlu fault, a considerable disparity in Poisson's ratio is found between the western and eastern sides of the fault, and an obvious Moho offset is coincidently observed at the same surface location. A reasonable density model for the Tanlu fault area is also derived by simulating the observed gravity variations. Both the receiver function study and the gravity anomaly modeling suggest that the crustal difference between the western and eastern sides of the Tanlu fault results mainly from their different compositions. With common conversion point imaging of receiver functions, I estimate the depths of the upper and lower boundaries of the mantle transition zone, i.e., the 410 km and 660 km discontinuities, beneath most of the North China continent. The thickness of the transition zone (TTZ) in the study area is calculated by subtracting the depth of the 410 km discontinuity from that of the 660 km discontinuity. The resulting TTZ is 10-15 km larger in the east than in the west of the study area. The phase transitions at the 410 km and 660 km discontinuities are known to have different Clapeyron slopes; therefore, the TTZ is sensitive to temperature changes in the transition zone.
Previous studies have shown that the TTZ is smaller in mantle plume areas and larger where remnants of subducted slabs are present. The mantle plume hypothesis therefore cannot reasonably explain the observed TTZ beneath North China; instead, the receiver function imaging results favor a dynamic model that links the thermal structure of the mantle transition zone and the associated upper mantle dynamics of North China to the Pacific plate subduction process.
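The transition-zone thickness measurement reduces to a subtraction of the two picked discontinuity depths, sketched here with invented depth values (the abstract reports an east-west TTZ difference of 10-15 km; the numbers below are illustrative, not the thesis's measurements):

```python
# depths (km) of the two discontinuities picked from receiver-function images;
# the values are illustrative, not the thesis's results
d410_west, d660_west = 410.0, 660.0     # reference (undisturbed) mantle
d410_east, d660_east = 415.0, 677.0     # depressed 660 suggests a cold anomaly

ttz_west = d660_west - d410_west        # transition-zone thickness, west
ttz_east = d660_east - d410_east        # transition-zone thickness, east

# a thickened transition zone is consistent with cold subducted-slab material,
# given the different Clapeyron slopes of the 410 and 660 phase transitions
anomaly = ttz_east - ttz_west
print(ttz_west, ttz_east, anomaly)      # 250.0 262.0 12.0
```

The sign convention matters: a cold anomaly uplifts the 410 and depresses the 660, so both picks move the TTZ in the same direction, which is why the thickness is a more robust thermometer than either depth alone.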
Abstract:
In low-level vision, the representation of scene properties such as shape, albedo, etc., are very high dimensional as they have to describe complicated structures. The approach proposed here is to let the image itself bear as much of the representational burden as possible. In many situations, scene and image are closely related and it is possible to find a functional relationship between them. The scene information can be represented in reference to the image where the functional specifies how to translate the image into the associated scene. We illustrate the use of this representation for encoding shape information. We show how this representation has appealing properties such as locality and slow variation across space and scale. These properties provide a way of improving shape estimates coming from other sources of information like stereo.
Abstract:
A common objective in learning a model from data is to recover its network structure, while the model parameters are of minor interest. For example, we may wish to recover regulatory networks from high-throughput data sources. In this paper we examine how Bayesian regularization using a Dirichlet prior over the model parameters affects the learned model structure in a domain with discrete variables. Surprisingly, a weak prior in the sense of smaller equivalent sample size leads to a strong regularization of the model structure (sparse graph) given a sufficiently large data set. In particular, the empty graph is obtained in the limit of a vanishing strength of prior belief. This is diametrically opposite to what one may expect in this limit, namely the complete graph from an (unregularized) maximum likelihood estimate. Since the prior affects the parameters as expected, the prior strength balances a "trade-off" between regularizing the parameters or the structure of the model. We demonstrate the benefits of optimizing this trade-off in the sense of predictive accuracy.
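The effect can be reproduced in a few lines for the simplest case of two binary variables, scoring the empty graph against the single-edge graph with the BDeu (Dirichlet) marginal likelihood. The counts below are invented illustrative data with a mild X-Y dependence; the scoring formula itself is the standard Bayesian Dirichlet family score.

```python
import numpy as np
from scipy.special import gammaln

# joint counts for two binary variables X, Y: mild dependence, P(Y = X) = 0.6
counts = np.array([[300, 200],
                   [200, 300]])          # counts[x, y], N = 1000

def family_score(node_counts, ess):
    """BDeu log marginal likelihood of one node.

    node_counts has shape (q, r): one row of r state-counts per parent config;
    the Dirichlet hyperparameter per cell is ess / (q * r)."""
    q, r = node_counts.shape
    alpha = ess / (q * r)
    score = 0.0
    for row in node_counts:
        score += gammaln(r * alpha) - gammaln(r * alpha + row.sum())
        score += np.sum(gammaln(alpha + row) - gammaln(alpha))
    return score

def structure_score(ess):
    """Scores of Y in the empty graph vs. the graph with edge X -> Y.

    X's own family score is identical in both structures, so it cancels."""
    y_marginal = counts.sum(axis=0).reshape(1, 2)   # Y with no parents
    return family_score(y_marginal, ess), family_score(counts, ess)

# a moderate prior strength picks up the dependence ...
empty, edge = structure_score(ess=1.0)
print(edge > empty)        # True: the edge X -> Y is preferred

# ... but a vanishing equivalent sample size regularizes the structure
# toward the empty graph, exactly the paper's observation
empty, edge = structure_score(ess=1e-12)
print(edge > empty)        # False: the empty graph is preferred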
Abstract:
Example-based methods are effective for parameter estimation problems when the underlying system is simple or the dimensionality of the input is low. For complex and high-dimensional problems such as pose estimation, the number of required examples and the computational complexity rapidly become prohibitively high. We introduce a new algorithm that learns a set of hashing functions that efficiently index examples relevant to a particular estimation task. Our algorithm extends a recently developed method for locality-sensitive hashing, which finds approximate neighbors in time sublinear in the number of examples. This method depends critically on the choice of hash functions; we show how to find the set of hash functions that are optimally relevant to a particular estimation problem. Experiments demonstrate that the resulting algorithm, which we call Parameter-Sensitive Hashing, can rapidly and accurately estimate the articulated pose of human figures from a large database of example images.
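The underlying locality-sensitive hashing step can be sketched with the classic random-hyperplane scheme for cosine similarity (Parameter-Sensitive Hashing replaces these random hyperplanes with hash functions learned for the estimation task; only the generic indexing machinery is shown here, with invented data):

```python
import numpy as np

class RandomHyperplaneLSH:
    """LSH with random hyperplanes: nearby vectors tend to share hash keys."""

    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))
        self.buckets = {}

    def key(self, x):
        # one bit per hyperplane: which side of the plane x falls on
        return ((self.planes @ x) > 0).tobytes()

    def index(self, examples):
        for i, x in enumerate(examples):
            self.buckets.setdefault(self.key(x), []).append(i)

    def query(self, x):
        # candidate approximate neighbors are those sharing the full key
        return self.buckets.get(self.key(x), [])

rng = np.random.default_rng(1)
data = rng.standard_normal((1000, 32))      # 1000 example vectors
lsh = RandomHyperplaneLSH(dim=32)
lsh.index(data)

# querying with a stored example retrieves its own (small) bucket
print(0 in lsh.query(data[0]))              # True
```

Because each query inspects only one bucket rather than all 1000 examples, lookup cost is sublinear in the database size, which is the property the paper's learned hash functions preserve while making the buckets parameter-relevant.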
Abstract:
On October 19-22, 1997 the Second PHANToM Users Group Workshop was held at the MIT Endicott House in Dedham, Massachusetts. Designed as a forum for sharing results and insights, the workshop was attended by more than 60 participants from 7 countries. These proceedings report on workshop presentations in diverse areas including rigid and compliant rendering, tool kits, development environments, techniques for scientific data visualization, multi-modal issues and a programming tutorial.
Abstract:
These proceedings summarize the results of the First PHANToM User's Group Workshop, held September 27-30, 1996 at MIT. The goal of the workshop was to bring together a group of active users of the PHANToM Haptic Interface to discuss the scientific and engineering challenges involved in bringing haptics into widespread use, and to explore the future possibilities of this exciting technology. With over 50 attendees and 25 presentations, the workshop provided the first large forum for users of a common haptic interface to share results and engage in collaborative discussions. Short papers from the presenters are contained herein and address the following topics: Research Effort Overviews, Displays and Effects, Applications in Teleoperation and Training, Tools for Simulated Worlds, and Data Visualization.
Abstract:
We constructed a parallelizing compiler that utilizes partial evaluation to achieve efficient parallel object code from very high-level data independent source programs. On several important scientific applications, the compiler attains parallel performance equivalent to or better than the best observed results from the manual restructuring of code. This is the first attempt to capitalize on partial evaluation's ability to expose low-level parallelism. New static scheduling techniques are used to utilize the fine-grained parallelism of the computations. The compiler maps the computation graph resulting from partial evaluation onto the Supercomputer Toolkit, an eight VLIW processor parallel computer.
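The core idea, that partially evaluating a data-independent program unfolds all its control flow into a flat computation graph whose independent operations can then be statically scheduled, can be sketched in miniature. This is a toy tracer, not the Supercomputer Toolkit compiler; all names are illustrative.

```python
class Node:
    """One primitive operation (or symbolic input) in the unfolded graph."""

    def __init__(self, op, args=()):
        self.op, self.args = op, args
        # dependency level: earliest parallel step at which this op can run
        self.level = 0 if not args else 1 + max(a.level for a in args)

    def __add__(self, other): return Node('+', (self, other))
    def __mul__(self, other): return Node('*', (self, other))

def dot(xs, ys):
    """Source program: data-independent, so partial evaluation unfolds it fully."""
    total = xs[0] * ys[0]
    for x, y in zip(xs[1:], ys[1:]):
        total = total + x * y
    return total

# trace with symbolic inputs -> the loop disappears, leaving only arithmetic
xs = [Node(f'x{i}') for i in range(4)]
ys = [Node(f'y{i}') for i in range(4)]
graph = dot(xs, ys)

def collect(node, seen):
    """Gather every operation node reachable from the graph's root."""
    if node in seen or not node.args:
        return
    seen.add(node)
    for a in node.args:
        collect(a, seen)

ops = set()
collect(graph, ops)
levels = {}
for n in ops:
    levels.setdefault(n.level, []).append(n.op)
print(sorted((lvl, sorted(names)) for lvl, names in levels.items()))
# [(1, ['*', '*', '*', '*']), (2, ['+']), (3, ['+']), (4, ['+'])]
```

All four multiplies sit at level 1 and could issue simultaneously on a VLIW machine; the serial chain of adds is the kind of structure a static scheduler would further restructure (e.g. by re-associating into a balanced reduction tree) to shorten the critical path.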
Abstract:
An investigation into innovation management and entrepreneurial management is conducted in this thesis. The aim of the research is to explore changes of innovation styles in the transformation process from a start-up company to a more mature phase of business, and, in a second step, to predict future sustainability and the probability of success. As businesses grow in revenue, corporate size and functional complexity, various triggers, supporters and drivers affect innovation and a company's success. In a comprehensive study, more than 200 innovative and technology-driven companies have been examined and compared to identify patterns at different performance levels. All of them were founded under the same formal requirements of the Munich Business Plan Competition, a research approach which allowed a unique snapshot that otherwise only long-term studies would be able to provide. The general objective was to identify the correlation of different factors, and different dimensions, with the incremental and radical innovations realised. The 12 hypotheses to be tested were derived from a comprehensive literature review. The relevant academic and practitioner literature on entrepreneurial, innovation and knowledge management, as well as social network theory, revealed that the concept of innovation has evolved significantly over the last decade. A review of over 15 innovation models/frameworks contributed to understanding what innovation means in context and what its dimensions are. It appears that the complex theories of innovation can be described by the increasing extent of social ingredients in the explanation of innovativeness. Originally based on tangible forms of capital, and on the necessity of market pull and technology push, innovation management is today integrated into a larger system. Therefore, two research instruments have been developed to explore the changes in innovation styles.
The Innovation Management Audits (IMA Start-up and IMA Mature) provided statements related to product/service development, innovativeness in various typologies, resources for innovation, innovation capabilities in conjunction with knowledge and management, and social networks, as well as the measurement of outcomes, to generate high-quality data for further exploration. In analysing the results, the mature companies have been clustered into the performance levels low, average and high, while the start-up companies have been kept as one cluster. Firstly, the analysis exposed that knowledge, the process of acquiring knowledge, interorganisational networks and resources for innovation are the most important driving factors for innovation and success. Secondly, the actual change of innovation style provides new insights into the importance of focusing on sustaining success and innovation in 16 key areas. Thirdly, a detailed overview of triggers, supporters and drivers of innovation and success for each dimension supports decision makers in steering their company in the right direction. Fourthly, a critical review of contemporary strategic management in conjunction with the findings provides recommendations on how to apply well-known management tools. Last but not least, the Munich cluster is analysed, providing an estimation of the success probability of the different performance clusters and the start-up companies. For the analysis of the probability of success, the newly developed and statistically and qualitatively validated ICP Model (Innovativeness, Capabilities & Potential) has been applied. While the model was primarily developed to evaluate the probability of success of companies, it is equally applicable to measuring innovativeness in order to identify the impact of various strategic initiatives within small or large enterprises.
The main findings of the model are that competitor and customer orientation and the acquisition of knowledge are important for incremental and radical innovation. Formal and interorganisational networks are important for fostering innovation, but informal networks appear to be detrimental to it. Testing the ICP model over the long term is recommended as one subject of further research; another is to investigate some of the more intangible aspects of innovation management, such as the attitude and motivation of managers.
Abstract:
C.H. Orgill, N.W. Hardy, M.H. Lee, and K.A.I. Sharpe. An application of a multiple agent system for flexible assembly tasks. In Knowledge-based environments for industrial applications including cooperating expert systems in control. IEE, London, 1989.