854 results for Multics (Computer operating system)
Abstract:
The technique of energy extraction using groundwater-source heat pumps, as a sustainable way of utilizing low-grade thermal energy, has been widely used since the mid-1990s. Based on the basic theories of groundwater flow and heat transfer, and employing two analytic models, the relationship between the thermal breakthrough time of a production well and the factors that affect it is analyzed, and the impact of conductive and convective heat transfer on the geo-temperature field under different groundwater velocity conditions is discussed. A mathematical model coupling the equations for groundwater flow with those for heat transfer was developed. The impact of energy mining using a single-well system that both supplies and returns water on the geo-temperature field under different hydrogeological conditions, well structures, withdrawal-and-reinjection rates, and natural groundwater flow velocities was quantitatively simulated using the finite-difference simulator HST3D, and the simulated results were analyzed theoretically. The simulated results for the single-well system indicate that neither the permeability nor the porosity of a homogeneous aquifer has a significant effect on the temperature of the production segment, provided that the production and injection capacity of each well in the aquifers involved meets the designed value. If a less permeable interlayer, compared with the main aquifer, exists between the production and injection segments, the temperature changes of the production segment will decrease; the thicker the interlayer and the lower its permeability, the longer the thermal breakthrough time and the smaller the temperature changes of the production segment. The modeling also shows that the temperature changes of the production segment decline with an increase in the aquifer thickness, the distance between the production and injection screens, or the regional groundwater flow velocity, and with a decrease in the production-and-reinjection rate. For an aquifer of constant thickness, continuously increasing the screen lengths of the production and injection segments may reduce the distance between the production and injection screens and consequently increase the temperature changes of the production segment. Based on the simulation results for the single-well system, the parameters that significantly influence heat transfer and the geo-temperature field were chosen for the doublet-system simulation. The results indicate that the temperature changes of the pumping well decrease as the aquifer thickness, the distance between the well pair, and/or the screen lengths of the doublet increase. When a low-permeability interlayer is embedded in the main aquifer, if the screens of the pumping and injection wells are installed below and above the interlayer respectively, the temperature changes of the pumping well will be smaller than without the interlayer; the lower the permeability of the interlayer, the smaller the temperature changes. The simulation results also indicate that the lower the pumping-and-reinjection rate, the greater the temperature changes of the pumping well. It can also be found that if the producer and the injector are chosen reasonably, the temperature changes of the pumping well decline as the regional groundwater flow velocity increases.
Compared with the case in which the groundwater flow direction is perpendicular to the well pair, if the regional flow is directed from the pumping well to the injection well, the temperature changes of the pumping well are relatively smaller. Based on the above simulation study, a case history was conducted using data from an operating system in Beijing. By means of the conceptual model and the mathematical model, a 3-D simulation model was developed, and the hydrogeological parameters and thermal properties were calibrated. The calibrated model was used to predict the evolution of the geo-temperature field over the next five years. The simulation results indicate that the calibrated model can represent the hydrogeological conditions and the nature of the aquifers. They also show that the temperature fronts in highly permeable aquifers move very fast and the radii of temperature influence are large; by comparison, the temperature changes in clay layers are smaller and show an obvious lag. Under the current energy mining load, the temperature of the pumping wells will increase by 0.7°C by the end of the next five years. This case study may provide a reliable basis for the scientific management of the operating system studied.
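As a rough illustration of how the thermal breakthrough time discussed above scales with well spacing, aquifer thickness, and pumping rate, the sketch below implements the classical analytic estimate for a doublet (a Gringarten-type solution). This is not the thesis's own analytic model or its HST3D setup; the parameter values are invented.

```python
import math

def doublet_breakthrough_time(L, b, Q, rho_c_aq=2.8e6, rho_c_w=4.18e6):
    """Thermal breakthrough time (s) along the direct streamline of a doublet.

    L         -- distance between pumping and injection wells (m)
    b         -- aquifer thickness (m)
    Q         -- pumping-and-reinjection rate (m^3/s)
    rho_c_aq  -- volumetric heat capacity of the saturated aquifer (J/m^3/K)
    rho_c_w   -- volumetric heat capacity of water (J/m^3/K)
    """
    # Hydraulic breakthrough time, scaled by the thermal retardation factor
    # (heat is stored in the whole rock-water matrix, not just the water).
    return (math.pi * b * L ** 2 / (3.0 * Q)) * (rho_c_aq / rho_c_w)

# Example: 200 m spacing, 40 m thick aquifer, 30 L/s withdrawal-and-reinjection.
t = doublet_breakthrough_time(L=200.0, b=40.0, Q=0.03)
print(f"thermal breakthrough after ~{t / (86400 * 365.25):.1f} years")
```

Consistent with the simulated trends, the estimate grows with the squared well spacing and the aquifer thickness, and shrinks as the pumping-and-reinjection rate rises.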
Abstract:
The Guangxi Longtan Hydropower Station is not only a representative project of China's Western Development and West-to-East Power Transmission programs, but also second only to the Three Gorges Project among hydropower stations under construction in China. There are 770 × 10⁴ m³ of creeping rock mass on the left bank slope upstream, within which nine water inlet tunnels and several underground plant buildings are laid. Since the 435 m high excavated slope threatens the security of the dam, its deformation and stability are of great importance to the power station. Based on Autodesk Map 2004, the Longtan Hydropower Station Left Bank Monitoring Information System has been largely completed. By integrating the hydropower station monitoring information into a Geographic Information System (GIS) environment, managers and engineers can dynamically obtain deformation information for the slope by querying the symbols. By this means, designers can improve the correctness of their analyses and make strategic, well-founded decisions. Since the system helps manage the monitoring data effectively, saves design and construction costs, and decreases the engineers' workload, it is a successful application combining hydropower station monitoring information management with computer information system technology. At the same time, on the basis of geological analysis and analysis of the toppling deformation and failure mechanism of the rock mass on the left bank slope of the Longtan project, a synthetic space-time analysis and influence-factor analysis of the surface and deep rock mass monitoring data of Zone A on the left bank slope are carried out. The analysis shows that the main intrinsic factor affecting the deformation of Zone A is the argillite-limestone interbedded toppling structure, and the main external factors are rainfall and slope excavation. Furthermore, the Degree of Reinforcement Demand (DRD) has been used to evaluate the reinforcement effect in Zone A on the left bank according to Engineering Geomechanics Meta-Synthetics (EGMS). The result shows that the slope has been effectively reinforced and is more stable after reinforcement. Finally, after comparison with several forecast models, a synthetic GRAV forecast model is presented and used to forecast the deformation of Zone A on the left bank during the power-generation period. The result indicates that the GRAV model has good forecast precision, strong stability, and practical reliability.
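The abstract names the synthetic GRAV forecast model but gives no formulation. As a hedged illustration of the grey-model family commonly used for slope deformation forecasting, the sketch below implements a minimal GM(1,1) predictor; the actual GRAV model presumably combines several such components, and the sample series is invented.

```python
import numpy as np

def gm11_forecast(x0, steps=3):
    """Grey GM(1,1) forecast of a positive deformation series x0."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])            # mean sequence of x1
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
    k = np.arange(1, len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))
    return x0_hat[-steps:]                   # forecasts beyond the data

# Invented monthly displacement increments (mm) for a monitoring point.
print(gm11_forecast([2.8, 3.2, 3.6, 4.1, 4.7], steps=3))
```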
Abstract:
Humans recognize optical reflectance properties of surfaces such as metal, plastic, or paper from a single image without knowledge of illumination. We develop a machine vision system to perform similar recognition tasks automatically. Reflectance estimation under unknown, arbitrary illumination proves highly underconstrained due to the variety of potential illumination distributions and surface reflectance properties. We have found that the spatial structure of real-world illumination possesses some of the statistical regularities observed in the natural image statistics literature. A human or computer vision system may be able to exploit this prior information to determine the most likely surface reflectance given an observed image. We develop an algorithm for reflectance classification under unknown real-world illumination, which learns relationships between surface reflectance and certain features (statistics) computed from a single observed image. We also develop an automatic feature selection method.
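The abstract does not list the exact statistics or classifier used; as a hedged sketch of the general approach (illumination-statistics features computed from a single image, fed to a learned classifier), the code below computes a few gradient-based statistics and trains an off-the-shelf SVM. The feature set, class names, and data are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def reflectance_features(image):
    """A few marginal statistics of a grayscale image (2-D float array)."""
    gy, gx = np.gradient(image)
    mag = np.hypot(gx, gy).ravel()
    log_mag = np.log(mag + 1e-8)
    skew = ((log_mag - log_mag.mean()) ** 3).mean() / log_mag.std() ** 3
    return np.array([
        log_mag.mean(),          # overall gradient energy
        log_mag.std(),           # spread of gradient energy
        skew,                    # asymmetry of the gradient distribution
        np.percentile(mag, 99),  # strength of sharp highlights
    ])

def train_classifier(images, labels):
    X = np.stack([reflectance_features(im) for im in images])
    return SVC(kernel="rbf").fit(X, labels)

# Toy usage with random stand-in images and invented class labels.
imgs = [np.random.rand(32, 32) for _ in range(4)]
clf = train_classifier(imgs, ["metal", "metal", "plastic", "plastic"])
print(clf.predict([reflectance_features(imgs[0])]))
```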
Abstract:
A prototype presentation system base is described. It offers mechanisms, tools, and ready-made parts for building user interfaces. A general user interface model underlies the base, organized around the concept of a presentation: a visible text or graphic for conveying information. The base and model emphasize domain independence and style independence, to apply to the widest possible range of interfaces. The primitive presentation system model treats the interface as a system of processes maintaining a semantic relation between an application data base and a presentation data base, the symbolic screen description containing presentations. A presenter continually updates the presentation data base from the application data base. The user manipulates presentations with a presentation editor. A recognizer translates the user's presentation manipulations into application data base commands. The primitive presentation system can be extended to model more complex systems by attaching additional presentation systems. To illustrate the model's generality and descriptive capabilities, extended model structures for several existing user interfaces are discussed. The base provides support for building the application and presentation data bases, linked together into a single, uniform network, including descriptions of classes of objects as well as the objects themselves. The base provides an initial presentation data base network, graphics to continually display it, and editing functions. A variety of tools and mechanisms help create and control presenters and recognizers. To demonstrate the base's utility, three interfaces to an operating system were constructed, embodying different styles: icons, menus, and graphical annotation.
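A minimal sketch of the presenter/recognizer cycle described above, with dictionaries standing in for the application and presentation data bases. The class and method names are hypothetical, not the thesis's actual interfaces.

```python
class Presenter:
    """Continually derives presentations from application objects."""
    def __init__(self, app_db, pres_db):
        self.app_db, self.pres_db = app_db, pres_db

    def update(self):
        for name, value in self.app_db.items():
            # One presentation (here, a text string) per application object.
            self.pres_db[name] = f"{name}: {value}"

class Recognizer:
    """Translates presentation manipulations into application commands."""
    def __init__(self, app_db):
        self.app_db = app_db

    def on_edit(self, name, new_text):
        # Recover the application-level command from the edited presentation.
        _, _, value = new_text.partition(": ")
        self.app_db[name] = value

app_db, pres_db = {"volume": "3"}, {}
presenter, recognizer = Presenter(app_db, pres_db), Recognizer(app_db)
presenter.update()                          # application -> presentation
recognizer.on_edit("volume", "volume: 5")   # user edit -> application command
presenter.update()
print(pres_db)                              # {'volume': 'volume: 5'}
```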
Abstract:
In view of the constant growth of writings on didactic and educational problems, it is necessary to create an efficient system of scientific educational information. Through a network of school and pedagogical libraries, this system will provide creative teachers with materials that facilitate the selection of and access to resources that enrich the teachers' methodological base and their own intellectual potential. Such a well-organized and efficiently operating system at the level of the school superintendent's office, whose links will be educational institutions as well as those that improve the teaching methods of the teaching staff, may be of great informational and practical importance in the present age of rapid transformations. It will become an instrument that makes possible contact with pedagogical writings and the improvement of the teaching staff's qualifications.
Abstract:
A probabilistic, nonlinear supervised learning model is proposed: the Specialized Mappings Architecture (SMA). The SMA employs a set of several forward mapping functions that are estimated automatically from training data. Each specialized function maps certain domains of the input space (e.g., image features) onto the output space (e.g., articulated body parameters). The SMA can model ambiguous, one-to-many mappings that may yield multiple valid output hypotheses. Once learned, the mapping functions generate a set of output hypotheses for a given input via a statistical inference procedure. The SMA inference procedure incorporates an inverse mapping or feedback function in evaluating the likelihood of each of the hypotheses. Possible feedback functions include computer graphics rendering routines that can generate images for given hypotheses. The SMA employs a variant of the Expectation-Maximization algorithm for simultaneous learning of the specialized domains along with the mapping functions, and approximate strategies for inference. The framework is demonstrated in a computer vision system that can estimate the articulated pose parameters of a human's body or hands, given silhouettes from a single image. The accuracy and stability of the SMA are also tested using synthetic images of human bodies and hands, where ground truth is known.
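A hedged sketch of the inference step described above: each specialized mapping proposes a hypothesis, and a feedback (inverse) function weights the hypotheses by how well they explain the observation. The mappings, the feedback function, and the toy numbers below are illustrative stand-ins, not the SMA's actual learned components.

```python
import numpy as np

def sma_infer(x, mappings, feedback, observation, sigma=1.0):
    """Weight each specialized mapping's hypothesis by feedback likelihood."""
    hyps = [m(x) for m in mappings]
    scores = np.array([np.exp(-np.sum((feedback(h) - observation) ** 2)
                              / (2 * sigma ** 2)) for h in hyps])
    return list(zip(hyps, scores / scores.sum()))

# Toy one-to-many setting: a "silhouette" feedback that discards sign, so
# two distinct poses explain the same observation equally well.
mappings = [lambda x, W=W: W @ x for W in (np.eye(2), -np.eye(2))]
feedback = np.abs
x = np.array([1.0, 2.0])
for hyp, weight in sma_infer(x, mappings, feedback, np.array([1.0, 2.0])):
    print(hyp, f"weight={weight:.2f}")
```

The tie in the output is the point: both hypotheses render to the same silhouette, the kind of ambiguity the SMA is designed to represent.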
Abstract:
A fundamental task of vision systems is to infer the state of the world given some form of visual observations. From a computational perspective, this often involves facing an ill-posed problem; e.g., information is lost via projection of the 3D world into a 2D image. Solution of an ill-posed problem requires additional information, usually provided as a model of the underlying process. It is important that the model be both computationally feasible and theoretically well-founded. In this thesis, a probabilistic, nonlinear supervised computational learning model is proposed: the Specialized Mappings Architecture (SMA). The SMA framework is demonstrated in a computer vision system that can estimate the articulated pose parameters of a human body or human hands, given images obtained via one or more uncalibrated cameras. The SMA consists of several specialized forward mapping functions that are estimated automatically from training data, and a possibly known feedback function. Each specialized function maps certain domains of the input space (e.g., image features) onto the output space (e.g., articulated body parameters). A probabilistic model for the architecture is first formalized. Solutions to key algorithmic problems are then derived: simultaneous learning of the specialized domains along with the mapping functions, as well as performing inference given inputs and a feedback function. The SMA employs a variant of the Expectation-Maximization algorithm and approximate inference. The approach allows the use of alternative conditional independence assumptions for learning and inference, which are derived from a forward model and a feedback model. Experimental validation of the proposed approach is conducted on the task of estimating articulated body pose from image silhouettes. The accuracy and stability of the SMA framework are tested using artificial data sets, as well as synthetic and real video sequences of human bodies and hands.
Abstract:
We designed the Eyebrow-Clicker, a camera-based human-computer interface system that implements a new form of binary switch. When the user raises his or her eyebrows, the binary switch is activated and a selection command is issued. The Eyebrow-Clicker thus replaces the "click" functionality of a mouse. The system initializes itself by detecting the user's eyes and eyebrows, tracks these features at frame rate, and recovers in the event of errors. The initialization uses the natural blinking of the human eye to select suitable templates for tracking. Once execution has begun, a user therefore never has to restart the program or even touch the computer. In our experiments with human-computer interaction software, the system correctly determined when a user raised his or her eyebrows 93% of the time.
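A minimal sketch of the binary-switch logic such a system might use once tracking is running: a click fires when the tracked eye-to-eyebrow distance rises sufficiently above its resting baseline, and the switch re-arms when the eyebrows come back down. The thresholds and distances are invented; detection, template selection, and tracking are outside this sketch.

```python
class EyebrowClicker:
    def __init__(self, baseline, threshold=1.3):
        self.baseline = baseline      # resting eye-eyebrow distance (px)
        self.threshold = threshold    # relative raise that counts as a click
        self.raised = False

    def update(self, distance):
        """Feed one per-frame measurement; return True when a click fires."""
        if not self.raised and distance > self.threshold * self.baseline:
            self.raised = True
            return True               # eyebrows raised: issue the click
        if self.raised and distance < 1.1 * self.baseline:
            self.raised = False       # eyebrows back down: re-arm the switch
        return False

clicker = EyebrowClicker(baseline=20.0)
for d in (20, 21, 27, 28, 22, 20):    # simulated per-frame distances
    if clicker.update(d):
        print("click!")
```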
Abstract:
The Grey Level Co-occurrence Matrix (GLCM), one of the best-known tools for texture analysis, estimates image properties related to second-order statistics. These image properties, commonly known as Haralick texture features, can be used for image classification, image segmentation, and remote sensing applications. However, their computation is highly intensive, especially for very large images such as medical ones. Therefore, methods to accelerate their computation are highly desired. This paper proposes the use of programmable hardware to accelerate the calculation of the GLCM and Haralick texture features. Further, as an example of the speedup offered by programmable logic, a multispectral computer vision system for automatic diagnosis of prostatic cancer has been implemented. The performance is then compared against a microprocessor-based solution.
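For reference, the sketch below computes a single-offset GLCM and two Haralick features (contrast and energy) in plain NumPy; this is the kind of per-pixel-pair accumulation the paper moves into programmable hardware. Real systems use several offsets, more gray levels, and the full feature set.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    q = (image.astype(float) / image.max() * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1   # count co-occurring pairs
    return m / m.sum()

def haralick(p):
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)   # local gray-level variation
    energy = np.sum(p ** 2)               # textural uniformity
    return contrast, energy

img = np.random.randint(0, 256, (64, 64))
print(haralick(glcm(img)))
```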
Abstract:
This paper introduces an automated computer-assisted system for the diagnosis of cervical intraepithelial neoplasia (CIN) using ultra-large cervical histological digital slides. The system contains two parts: the segmentation of the squamous epithelium and the diagnosis of CIN. For the segmentation, to reduce processing time, a multiresolution method is developed. The squamous epithelium layer is first segmented at a low (2X) resolution, and the boundaries are then fine-tuned at a higher (20X) resolution. The block-based segmentation method uses robust texture feature vectors in combination with support vector machines (SVMs) to perform classification, and medical rules are finally applied. In testing, segmentation using 31 digital slides achieves 94.25% accuracy. For the diagnosis of CIN, changes in nuclear structure and morphology along lines perpendicular to the main axis of the squamous epithelium are quantified and classified. Using a multi-category SVM, perpendicular lines are classified into Normal, CIN I, CIN II, and CIN III. The robustness of the system in terms of regional diagnosis is measured against pathologists' diagnoses, and inter-observer variability between two pathologists is considered. Initial results suggest that the system has potential as a tool both to assist in pathologists' diagnoses and in training.
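A hedged sketch of the coarse-to-fine idea in the segmentation stage: classify blocks at the low resolution first, then flag only the blocks that border a label change for re-examination at 20X. The block classifier below (mean brightness) is a toy stand-in for the paper's texture-feature SVM, and the image is synthetic.

```python
import numpy as np

def classify_blocks(img, block, classify_block):
    """Label every block of a low-resolution image (the coarse pass)."""
    h, w = img.shape[0] // block, img.shape[1] // block
    return np.array([[classify_block(img[y*block:(y+1)*block,
                                         x*block:(x+1)*block])
                      for x in range(w)] for y in range(h)], dtype=bool)

def boundary_blocks(coarse):
    """Blocks bordering a label change: only these are revisited at 20X."""
    pad = np.pad(coarse, 1, mode="edge")
    return ((pad[:-2, 1:-1] != coarse) | (pad[2:, 1:-1] != coarse) |
            (pad[1:-1, :-2] != coarse) | (pad[1:-1, 2:] != coarse))

# Toy stand-in for the texture + SVM block classifier: bright = epithelium.
img = np.zeros((64, 64)); img[:, 24:] = 1.0
coarse = classify_blocks(img, block=8, classify_block=lambda b: b.mean() > 0.5)
print(boundary_blocks(coarse).sum(), "of", coarse.size, "blocks need 20X review")
```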
Abstract:
In a human-computer dialogue system, the dialogue strategy can range from very restrictive to highly flexible. Each specific dialogue style has its pros and cons, and a dialogue system needs to select the most appropriate style for a given user. During the course of interaction, the dialogue style can change based on a user's responses and the system's observation of the user. This allows a dialogue system to understand a user better and provide a more suitable way of communication. Since measures of the quality of the user's interaction with the system can be incomplete and uncertain, frameworks for reasoning with uncertain and incomplete information can help the system make better decisions when it chooses a dialogue strategy. In this paper, we investigate how to select a dialogue strategy by aggregating the factors detected during the interaction with the user. For this purpose, we use probabilistic logic programming (PLP) to model probabilistic knowledge about how these factors affect the degree of freedom of a dialogue. When a dialogue system needs to know which strategy is more suitable, an appropriate query can be executed against the PLP and a probabilistic solution with a degree of satisfaction is returned. The degree of satisfaction reveals how much the system can trust the probability attached to the solution.
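A heavily hedged toy of the query idea: uncertain interaction factors are combined into a probability interval for "a flexible strategy suits this user", and the tightness of that interval stands in for the degree of satisfaction. The actual PLP semantics in the paper are richer than this; all names and numbers below are invented.

```python
def combine(intervals):
    """Conjunction of independent factors as a probability interval."""
    lo, hi = 1.0, 1.0
    for l, h in intervals:
        lo *= l
        hi *= h
    return lo, hi

# Factors observed during interaction: (lower, upper) probability bounds.
factors = {"user_is_expert": (0.6, 0.8), "low_error_rate": (0.7, 0.9)}
lo, hi = combine(factors.values())
satisfaction = 1.0 - (hi - lo)     # tighter interval -> more trustworthy
print(f"P(flexible strategy suits user) in [{lo:.2f}, {hi:.2f}], "
      f"degree of satisfaction {satisfaction:.2f}")
```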
Abstract:
This paper presents a novel method that leverages reasoning capabilities in a computer vision system dedicated to human action recognition. The proposed methodology is decomposed into two stages. First, a machine learning algorithm, known as bag of words, gives a first estimate of action classification from video sequences by performing an image feature analysis. These results are then passed to a common-sense reasoning system, which analyses, selects, and corrects the initial estimate yielded by the machine learning algorithm. This second stage draws on the knowledge implicit in the rationality that motivates human behaviour. Experiments are performed in realistic conditions, where poor recognition rates from the machine learning techniques are significantly improved by the second stage, in which common-sense knowledge and reasoning capabilities have been leveraged. This demonstrates the value of integrating common-sense capabilities into a computer vision pipeline.
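A minimal sketch of the two-stage shape of such a pipeline: classifier scores come in, and a common-sense stage vetoes candidates that are implausible in the observed context. The scores, context, and rule are invented for illustration and are not the paper's knowledge base.

```python
def commonsense_rerank(scores, context, rules):
    """Keep only actions the rules allow in this context; pick the best."""
    plausible = {a: s for a, s in scores.items()
                 if all(rule(a, context) for rule in rules)}
    return max(plausible, key=plausible.get)

# Stage 1 output: bag-of-words scores per action (invented numbers).
scores = {"drinking": 0.41, "phoning": 0.39, "brushing_teeth": 0.20}
context = {"location": "office"}
# Stage 2: a toy common-sense rule about where actions plausibly happen.
rules = [lambda a, c: not (a == "brushing_teeth" and c["location"] == "office")]
print(commonsense_rerank(scores, context, rules))   # -> "drinking"
```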
Abstract:
Smart Spaces, Ambient Intelligence, and Ambient Assisted Living are environmental paradigms that strongly depend on the capability to recognize human actions. While most solutions rest on sensor value interpretation and video analysis applications, few have recognized the importance of incorporating common-sense capabilities to support the recognition process. Unfortunately, human action recognition cannot be successfully accomplished by analyzing body postures alone. On the contrary, this task should be supported by profound knowledge of the nature of human agency and its tight connection to the reasons and motivations that explain it. The combination of this knowledge with knowledge about how the world works is essential for recognizing and understanding human actions without committing common-senseless mistakes. This work demonstrates the impact that episodic reasoning has in improving the accuracy of a computer vision system for human action recognition. It also presents formalization, implementation, and evaluation details of the knowledge model that supports the episodic reasoning.
Abstract:
In this paper, we propose a design paradigm for energy-efficient and variation-aware operation of next-generation multicore heterogeneous platforms. The main idea behind the proposed approach lies in the observation that not all operations are equally important in shaping the output quality of various applications and of the overall system. Based on this observation, we suggest that all levels of the software design stack, including the programming model, compiler, operating system (OS), and run-time system, should identify the critical tasks and ensure correct operation of such tasks by assigning them to dynamically adjusted reliable cores/units. Specifically, based on error rates and operating conditions identified by a sense-and-adapt (SeA) unit, the OS selects and sets the right mode of operation of the overall system. The run-time system identifies the critical and less-critical tasks based on special directives and schedules them to the appropriate units, which are dynamically adjusted for highly accurate or approximate operation by tuning their voltage/frequency. Units that execute less significant operations can operate at voltages lower than what is required for correct operation and consume less power, if required, since such tasks, as opposed to the critical ones, do not always need to be exact. Such a scheme can lead to energy-efficient and reliable operation, while reducing the design cost and overheads of conventional circuit/micro-architecture level techniques.
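A hedged sketch of the run-time policy just described: tasks carry a criticality directive, critical tasks go to cores held at a reliably correct voltage, and the rest may run on cores scaled below that point to save power. The task names, voltages, and mapping below are invented, not the paper's actual mechanism.

```python
RELIABLE = {"voltage": 1.00}   # guaranteed-correct operating point
APPROX   = {"voltage": 0.85}   # may err on non-critical work, saves power

def schedule(tasks):
    """tasks: list of (name, critical_flag) -> list of (name, core kind)."""
    return [(name, "reliable" if critical else "approximate")
            for name, critical in tasks]

tasks = [("control_loop", True), ("pixel_filter", False), ("logging", False)]
for name, core in schedule(tasks):
    v = (RELIABLE if core == "reliable" else APPROX)["voltage"]
    print(f"{name:>12} -> {core} core @ {v:.2f} V")
```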
Abstract:
This paper evaluates the viability of user-level software management of a hybrid DRAM/NVM main memory system. We propose an operating system (OS) and programming interface to place data from within the user application. We present a profiling tool to help programmers decide on the placement of application data in hybrid memory systems. Cycle-accurate simulation of modified applications confirms that our approach is more energy-efficient than state-of-the-art hardware or OS approaches at equivalent performance. Moreover, our results are validated on several candidate NVM technologies and a wide set of 14 benchmarks.
The key observation behind this work is that, for the workloads we evaluated, application objects are too short-lived to motivate migration. Utilizing this property significantly reduces the hardware complexity of hybrid memory systems.
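A hedged sketch of what a user-level placement interface in this spirit could look like: allocation requests carry profiler-derived hints, and since the paper finds objects too short-lived to justify migration, the DRAM-or-NVM decision is made once, at allocation time. The class, thresholds, and fields below are hypothetical, not the paper's actual API.

```python
class HybridAllocator:
    def __init__(self):
        self.dram, self.nvm = {}, {}   # stand-ins for the two memory pools

    def alloc(self, name, size, write_intensity, lifetime_s):
        # Short-lived or write-hot objects stay in DRAM; everything else
        # goes to NVM. No migration: the decision is final at allocation.
        target = (self.dram if lifetime_s < 1.0 or write_intensity > 0.5
                  else self.nvm)
        target[name] = bytearray(size)

alloc = HybridAllocator()
alloc.alloc("frame_buffer", 4096, write_intensity=0.9, lifetime_s=0.02)
alloc.alloc("lookup_table", 4096, write_intensity=0.01, lifetime_s=3600)
print(sorted(alloc.dram), sorted(alloc.nvm))
```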