912 results for pacs: neural computing technologies
Abstract:
With major advances in seismic source theory, computing technologies and survey instruments, the rupture process of earthquakes can now be modeled and reconstructed more realistically, so that the properties of earthquake sources and the laws governing tectonic activity can be understood more clearly. The research reported in this paper is as follows. Based on the generalized ray method, expressions are obtained for the displacement on the surface of a half-space due to an arbitrarily oriented shear and tensile dislocation. Kinematically, fault-normal motion is equivalent to tensile faulting, and there is evidence that such motion occurs in many earthquakes. Expressions for the static displacements on the surface of a layered half-space due to a static point moment tensor source are then derived using the generalized reflection and transmission coefficient matrix method. The validity and precision of the new method are demonstrated by the consistency of our results with the analytical solution given by Okada's code for the same point source and a homogeneous half-space model. The vertical ground displacement computed with the moment tensor solution of the Lancang-Gengma earthquake differs considerably from that computed with the double-couple component alone. The effect of a soft layer at the top of a homogeneous half-space on a shallow normal-faulting earthquake is also analyzed. Our results show that more seismic information can be obtained by using a seismic moment tensor source together with a layered half-space model. The rupture process of the 1999 Chi-Chi, Taiwan, earthquake is investigated using co-seismic surface displacements from GPS observations and far-field P-wave records. Guided by tectonic analysis and the distribution of aftershocks, we introduce a three-segment bending fault plane into our model, and use both elastic half-space and layered-earth models to invert for the distribution of co-seismic slip along the Chi-Chi rupture. The results indicate that a pure shear-slip model cannot fit the horizontal and vertical co-seismic displacements simultaneously unless a fault-normal (tensile) component is added to the inversion. The Chi-Chi rupture process is then obtained by joint inversion of the seismograms and GPS observations. The inverted fault-normal motions concentrate on the shallow northern bending fault from Fengyuan to Shuangji, where the surface ruptures are more complex and flexural-slip folding structures are more developed than along the other portions of the rupture zone. To understand the perturbation of surface displacements caused by near-surface complex structures, we carried out a numerical test that synthesizes and inverts the surface displacements for a pop-up structure composed of a main thrust and a back thrust. The result indicates that the pop-up structure, the typical shallow complex rupture that occurred in the northern bending fault zone from Fengyuan to Shuangji, is modeled better by a thrust fault with an added negative tensile component than by a simple thrust fault. We interpret the negative tensile distribution concentrated on the shallow northern bending fault from Fengyuan to Shuangji as the combined effect of complexities in the rupture's properties and geometry. The rupture process also reveals greater spatial and temporal complexity from Fengyuan to Shuangji.
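A common way to pose the co-seismic slip inversion described above is as a damped linear least-squares problem: the GPS displacements are related to slip on discretized fault patches through Green's functions computed from a half-space or layered-earth model. The following is a minimal sketch of that step only, with random placeholder Green's functions and purely illustrative dimensions; it is not the authors' code.

```python
# Minimal sketch of a damped least-squares slip inversion: d = G s, where d are
# co-seismic GPS displacements, s is slip on fault patches, and G holds
# Green's functions (here random placeholders; in practice they would come
# from a half-space or layered-earth code). All names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_obs = 300      # e.g. 100 GPS stations x 3 components (E, N, U)
n_patch = 120    # discretized fault patches (e.g. three bending segments)

G = rng.normal(size=(n_obs, n_patch))                 # placeholder Green's functions
true_slip = np.maximum(rng.normal(1.0, 0.5, n_patch), 0.0)
d = G @ true_slip + rng.normal(0.0, 0.05, n_obs)      # synthetic observations

# Damped (Tikhonov-regularized) least squares: minimize ||G s - d||^2 + beta^2 ||s||^2
beta = 0.1
A = np.vstack([G, beta * np.eye(n_patch)])
b = np.concatenate([d, np.zeros(n_patch)])
slip_est, *_ = np.linalg.lstsq(A, b, rcond=None)

print("relative misfit:", np.linalg.norm(G @ slip_est - d) / np.linalg.norm(d))
```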
From three-component teleseismic records, the S-wave velocity structure beneath 59 teleseismic stations in Taiwan is obtained using the transfer function method and SA techniques. The integrated result, a 3-D crustal structure of Taiwan, shows that the thickest crust is located beneath the western Central Range, in agreement with the Bouguer gravity anomaly. The orogenic evolution of Taiwan is still young: the developing crustal root of the Central Range is not in static (isostatic) balance, and the crust of Taiwan remains in a state of dynamic equilibrium. The rupture process of the 24 February 2003 Jiashi, Xinjiang earthquake is estimated with a finite fault model using far-field broadband P-wave records from CDSN and IRIS. The results indicate that the earthquake occurred on a north-dipping thrust fault with some left-lateral strike slip. Its focal mechanism differs from those of the earthquakes that occurred in 1997 and 1998, but is similar to that of the 1996 Artux, Xinjiang earthquake. We interpret the thrust faulting as the result of the Tarim basin pushing northward against the orogeny of the Tianshan mountains. Finally, a brief outlook on future work is given: building an Internet-based real-time distributed system for determining the rupture process of large earthquakes.
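The station-by-station S-wave structure estimation mentioned above rests on an SA-style (simulated-annealing) search over layered velocity models. The sketch below shows the shape of such a search only; the misfit function is a stand-in (a real application would compare observed and synthetic transfer functions), and the layer values are purely illustrative.

```python
# Minimal simulated-annealing sketch for estimating a layered S-wave model:
# perturb the model, accept improvements always and worsenings with a
# temperature-dependent probability, then cool down.
import numpy as np

rng = np.random.default_rng(5)
true_model = np.array([3.2, 3.6, 4.0, 4.5])        # S-wave velocities of 4 layers (km/s)

def misfit(model):
    # Placeholder for the transfer-function misfit between data and synthetics.
    return np.sum((model - true_model) ** 2)

model = np.full(4, 3.8)                             # starting model
temperature = 1.0
for step in range(5000):
    trial = model + rng.normal(0.0, 0.05, size=model.shape)
    d_e = misfit(trial) - misfit(model)
    if d_e < 0 or rng.random() < np.exp(-d_e / temperature):
        model = trial
    temperature *= 0.999                            # cooling schedule

print("recovered velocities:", np.round(model, 2))
```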
Abstract:
It is in the interests of everybody that the environment is protected. In view of the recent leaps in environmental awareness it would seem timely and sensible, therefore, for people to pool vehicle resources to minimise the damaging impact of emissions. However, this is often contrary to how complex social systems behave – local decisions made by self-interested individuals often have emergent effects that are in the interests of nobody. For software engineers a major challenge is to help facilitate individual decision-making such that individual preferences can be met, which, when accumulated, minimise adverse effects at the level of the transport system. We introduce this general problem through a concrete example based on vehicle-sharing. Firstly, we outline the kind of complex transportation problem that is directly addressed by our technology (CO2y™ - pronounced “cosy”), and also show how this differs from other more basic software solutions. The CO2y™ architecture is then briefly introduced. We outline the practical advantages of the advanced, intelligent software technology that is designed to satisfy a number of individual preference criteria and thereby find appropriate matches within a population of vehicle-share users. An example scenario of use is put forward, i.e., minimisation of grey-fleets within a medium-sized company. Here we comment on some of the underlying assumptions of the scenario, and how in a detailed real-world situation such assumptions might differ between different companies, and individual users. Finally, we summarise the paper, and conclude by outlining how the problem of pooled transportation is likely to benefit from the further application of emergent, nature-inspired computing technologies. These technologies allow systems-level behaviour to be optimised with explicit representation of individual actors. With these techniques we hope to make real progress in facing the complexity challenges that transportation problems produce.
Abstract:
The advent of virtualization and cloud computing technologies necessitates the development of effective mechanisms for the estimation and reservation of resources needed by content providers to deliver large numbers of video-on-demand (VOD) streams through the cloud. Unfortunately, capacity planning for the QoS-constrained delivery of a large number of VOD streams is inherently difficult as VBR encoding schemes exhibit significant bandwidth variability. In this paper, we present a novel resource management scheme to make such allocation decisions using a mixture of per-stream reservations and an aggregate reservation, shared across all streams to accommodate peak demands. The shared reservation provides capacity slack that enables statistical multiplexing of peak rates, while assuring analytically bounded frame-drop probabilities, which can be adjusted by trading off buffer space (and consequently delay) and bandwidth. Our two-tiered bandwidth allocation scheme enables the delivery of any set of streams with less bandwidth (or equivalently with higher link utilization) than state-of-the-art deterministic smoothing approaches. The algorithm underlying our proposed framework uses three per-stream parameters and is linear in the number of servers, making it particularly well suited for use in an on-line setting. We present results from extensive trace-driven simulations, which confirm the efficiency of our scheme especially for small buffer sizes and delay bounds, and which underscore the significant realizable bandwidth savings, typically yielding losses that are an order of magnitude or more below our analytically derived bounds.
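A minimal sketch of the two-tiered idea follows, using synthetic stand-ins for the VBR traces and a simple percentile rule for sizing the shared slack; the actual scheme instead uses three per-stream parameters and analytically bounded drop probabilities, so this is illustrative only.

```python
# Two-tiered reservation sketch: each VBR stream gets a modest per-stream
# reservation, and one aggregate reservation, shared by all streams, absorbs
# the statistically multiplexed peaks.
import numpy as np

rng = np.random.default_rng(2)

n_streams, n_frames = 200, 5000
# Synthetic stand-ins for per-frame VBR bandwidth traces (Mbps).
traces = rng.gamma(shape=2.0, scale=1.0, size=(n_streams, n_frames))

per_stream = traces.mean(axis=1)                       # per-stream reservations
excess = np.maximum(traces - per_stream[:, None], 0.0).sum(axis=0)
shared = np.quantile(excess, 0.999)                    # aggregate slack for peaks

two_tier_total = per_stream.sum() + shared
peak_based_total = traces.max(axis=1).sum()            # naive per-stream peak allocation

print(f"two-tier reservation : {two_tier_total:8.1f} Mbps")
print(f"per-stream peak total: {peak_based_total:8.1f} Mbps")
# Frames whose aggregate excess exceeds the shared slack would risk drops.
print("fraction of frames over slack:", (excess > shared).mean())
```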
Abstract:
Wireless Inertial Measurement Units (WIMUs) combine motion sensing, processing & communications functions in a single device. Data gathered using these sensors has the potential to be converted into high quality motion data. By outfitting a subject with multiple WIMUs, full motion data can be gathered. With a potential cost of ownership several orders of magnitude less than traditional camera-based motion capture, WIMU systems have potential to be crucially important in supplementing or replacing traditional motion capture and opening up entirely new application areas and potential markets, particularly in the rehabilitative, sports & at-home healthcare spaces. Currently WIMUs are underutilized in these areas. A major barrier to adoption is perceived complexity. Sample rates, sensor types & dynamic sensor ranges may need to be adjusted on multiple axes for each device depending on the scenario. As such we present an advanced WIMU in conjunction with a Smart WIMU system to simplify this aspect with 3 usage modes: Manual, Intelligent and Autonomous. Attendees will be able to compare the 3 different modes and see the effects of good and bad set-ups on the quality of data gathered in real time.
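As an illustration of the configuration burden the three modes are meant to ease, the sketch below shows the kind of per-device settings involved; the sensor placements, rates and ranges are illustrative assumptions, not the actual device's parameter set.

```python
# Sketch of per-device WIMU settings: in Manual mode a user would fill in every
# field, while Intelligent/Autonomous modes would derive them from the activity.
from dataclasses import dataclass

@dataclass
class SensorConfig:
    sample_rate_hz: int          # e.g. 100 Hz for gait, higher for impacts
    accel_range_g: int           # accelerometer dynamic range (+/- g)
    gyro_range_dps: int          # gyroscope dynamic range (+/- deg/s)
    enabled_axes: tuple = ("x", "y", "z")

@dataclass
class WIMUConfig:
    mode: str                    # "manual", "intelligent" or "autonomous"
    sensors: dict                # body location -> SensorConfig

# A manual set-up a user might choose for a lower-limb rehabilitation session.
config = WIMUConfig(
    mode="manual",
    sensors={
        "thigh": SensorConfig(sample_rate_hz=100, accel_range_g=4, gyro_range_dps=500),
        "shank": SensorConfig(sample_rate_hz=200, accel_range_g=8, gyro_range_dps=1000),
    },
)
print(config)
```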
Abstract:
This paper provides a system description and preliminary results for an ongoing clinical study currently being carried out at the Mid-Western Regional Hospital, Nenagh, Ireland. The goal of the trial is to determine if wireless inertial measurement technology can be employed to identify elderly patients at risk of death or imminent clinical deterioration. The system measures cumulative movement and provides a score that will help provide a robust early warning to clinical staff of clinical deterioration. In addition the study examines some of the logistical barriers to the adoption of wearable wireless technology in front-line medical care.
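A minimal sketch of how a cumulative movement score might be derived from wearable accelerometer data is shown below; the sampling rate, dead-band and scoring rule are illustrative assumptions rather than the trial's validated algorithm.

```python
# Cumulative movement score sketch: sum, over the monitoring period, the amount
# by which the acceleration magnitude departs from rest (1 g).
import numpy as np

def cumulative_movement_score(accel_xyz, fs_hz=50.0, rest_g=1.0, deadband=0.05):
    """accel_xyz: (n_samples, 3) array of accelerations in g."""
    magnitude = np.linalg.norm(accel_xyz, axis=1)
    activity = np.maximum(np.abs(magnitude - rest_g) - deadband, 0.0)
    return activity.sum() / fs_hz          # 'g-seconds' of movement

# One hour of synthetic, mostly-still data with a short burst of activity.
rng = np.random.default_rng(3)
acc = rng.normal([0.0, 0.0, 1.0], 0.01, size=(180_000, 3))
acc[10_000:11_000] += rng.normal(0.0, 0.3, size=(1_000, 3))
print("score:", round(float(cumulative_movement_score(acc)), 2))
```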
Abstract:
A higher order version of the Hopfield neural network is presented which will perform a simple vector quantisation or clustering function. This model requires no penalty terms to impose constraints in the Hopfield energy, in contrast to the usual one where the energy involves only terms quadratic in the state vector. The energy function is shown to have no local minima within the unit hypercube of the state vector so the network only converges to valid final states. Optimisation trials show that the network can consistently find optimal clusterings for small, trial problems and near optimal ones for a large data set consisting of the intensity values from the digitised, grey-level image.
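For reference, the vector-quantisation objective the network optimises can be stated compactly: a binary assignment of data points to cluster centres that minimises the total within-cluster squared distance. The sketch below evaluates and minimises this objective by plain alternating minimisation, not by the higher-order Hopfield dynamics of the abstract, and uses illustrative synthetic data.

```python
# Clustering objective sketch: binary assignments minimising within-cluster
# squared distance, improved by simple alternating minimisation.
import numpy as np

rng = np.random.default_rng(4)
data = np.vstack([rng.normal(m, 0.5, size=(50, 2)) for m in ((0, 0), (4, 0), (2, 3))])
k = 3

def quantisation_energy(data, assign, centres):
    """Sum of squared distances from each point to its assigned centre."""
    return sum(np.sum((data[assign == j] - centres[j]) ** 2) for j in range(len(centres)))

centres = data[rng.choice(len(data), k, replace=False)]
for _ in range(20):
    dists = ((data[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    assign = dists.argmin(axis=1)          # valid states only: one cluster per point
    centres = np.array([data[assign == j].mean(axis=0) if np.any(assign == j) else centres[j]
                        for j in range(k)])

print("final energy:", round(float(quantisation_energy(data, assign, centres)), 2))
```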
Abstract:
Use of the computational Grid has become a common way for researchers and scientists without access to local processor clusters to obtain the benefits of parallel processing for compute-intensive applications. This demand requires effective and efficient dynamic allocation of the available resources. Although static scheduling and allocation techniques have proved effective, the dynamic nature of the Grid calls for innovative techniques that react to change while maintaining stability for users. Dynamic scheduling requires powerful optimization techniques, yet these can themselves be too slow to react in time to produce an effective schedule; there is often a trade-off between solution quality and the speed with which a solution is reached. This paper presents an extension of a technique used in optimization and scheduling that provides a means of achieving this balance and improves on similar approaches currently published.
Abstract:
This paper describes the application of an improved nonlinear principal component analysis (PCA) to the detection of faults in polymer extrusion processes. Since the processes are complex in nature and nonlinear relationships exist between the recorded variables, an improved nonlinear PCA, which incorporates the radial basis function (RBF) networks and principal curves, is proposed. This algorithm comprises two stages. The first stage involves the use of the serial principal curve to obtain the nonlinear scores and approximated data. The second stage is to construct two RBF networks using a fast recursive algorithm to solve the topology problem in traditional nonlinear PCA. The benefits of this improvement are demonstrated in the practical application to a polymer extrusion process.
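A minimal sketch of the two-stage structure follows. For brevity the serial principal curve is replaced by the first linear principal component and the network weights are found by ordinary least squares rather than the fast recursive algorithm; all parameter choices are illustrative, so this is a simplified stand-in rather than the authors' implementation.

```python
# Two-stage nonlinear PCA sketch: stage 1 produces a 1-D nonlinear score per
# sample (here approximated by the first principal component); stage 2 fits two
# RBF networks, one mapping data -> score and one mapping score -> reconstruction.
import numpy as np

def rbf_design(x, centres, width):
    """Gaussian RBF design matrix with a bias column."""
    d2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2.0 * width ** 2))
    return np.hstack([phi, np.ones((x.shape[0], 1))])

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                    # stand-in for extrusion measurements
X = (X - X.mean(0)) / X.std(0)

# Stage 1: nonlinear scores (first principal component as a simple stand-in
# for the serial principal curve).
_, _, Vt = np.linalg.svd(X, full_matrices=False)
t = X @ Vt[0]

# Stage 2: RBF network 1 (data -> score) and RBF network 2 (score -> data).
c1 = X[rng.choice(len(X), 20, replace=False)]
w1, *_ = np.linalg.lstsq(rbf_design(X, c1, width=1.5), t, rcond=None)

c2 = t[rng.choice(len(t), 20, replace=False)][:, None]
Phi2 = rbf_design(t[:, None], c2, width=0.5)
w2, *_ = np.linalg.lstsq(Phi2, X, rcond=None)

X_hat = Phi2 @ w2                                # approximated data
print("mean squared prediction error:", ((X - X_hat) ** 2).sum(1).mean())
```

In a fault-detection setting, the squared prediction error computed this way would be monitored against a control limit derived from normal operating data.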
Abstract:
Heterogeneous computing technologies, such as multi-core CPUs, GPUs and FPGAs can provide significant performance improvements. However, developing applications for these technologies often results in coupling applications to specific devices, typically through the use of proprietary tools. This paper presents SHEPARD, a compile time and run-time framework that decouples application development from the target platform and enables run-time allocation of tasks to heterogeneous computing devices. Through the use of special annotated functions, called managed tasks, SHEPARD approximates a task's performance on available devices, and coupled with the approximation of current device demand, decides which device can satisfy the task with the lowest overall execution time. Experiments using a task parallel application, based on an in-memory database, demonstrate the opportunity for automatic run-time task allocation to achieve speed-up over a static allocation to a single specific device.
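A minimal sketch of the allocation rule described above follows; the device names, per-task timings and demand model are illustrative assumptions, not SHEPARD's actual measurements or API.

```python
# Allocation sketch: for a managed task, estimate its execution time on each
# available device, add the work already queued on that device, and pick the
# device with the lowest overall completion time.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    queued_time: float = 0.0      # current demand: seconds of queued work

# Approximate per-device execution times for each managed task (e.g. from
# prior profiling), in seconds. Values are illustrative.
task_costs = {
    "scan_table":   {"cpu": 0.9, "gpu": 0.3, "fpga": 0.5},
    "sort_results": {"cpu": 0.4, "gpu": 0.6, "fpga": 0.7},
}

devices = {d.name: d for d in (Device("cpu"), Device("gpu"), Device("fpga"))}

def allocate(task: str) -> str:
    """Pick the device with the lowest estimated completion time and book it."""
    costs = task_costs[task]
    best = min(costs, key=lambda dev: devices[dev].queued_time + costs[dev])
    devices[best].queued_time += costs[best]
    return best

for t in ["scan_table", "scan_table", "sort_results", "scan_table"]:
    print(t, "->", allocate(t))
```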
Abstract:
An education in Physics develops both strong cognitive and practical skills. These are well-matched to the needs of employers, from engineering to banking. Physics provides the foundation for all engineering and scientific disciplines including computing technologies, aerospace, communication, and also biosciences and medicine. In academe, Physics addresses fundamental questions about the universe, the nature of reality, and of the complex socio-economic systems comprising our daily lives. Yet today, there are emerging concerns about Physics education: Secondary school interest in Physics is falling, as is the number of Physics school teachers. There is clearly a crisis in physics education; recent research has identified principal factors. Starting from a review of these factors, and from recommendations of professional bodies, this paper proposes a novel solution – the use of Computer Games to teach physics to school children, to university undergraduates and to teacher-trainees.
Abstract:
Employee collaboration and knowledge sharing is vital for manufacturing organisations wishing to be successful in an ever-changing global market place; Product Development (PD) teams, in particular, rely heavily on these activities to generate innovative designs and enhancements to existing product ranges. To this end, the purpose of this paper is to present the results of a validation study carried out during an Engineering Education Scheme project to confirm the benefits of using bespoke Web 2.0-based groupware to improve employee collaboration and knowledge sharing between dispersed PD teams. The results of a cross-sectional survey concluded that employees would welcome greater usage of social computing technologies. The study confirmed that groupware offers the potential to deliver a more effective collaborative and knowledge sharing environment with additional communication channels on offer. Furthermore, a series of recommended guidelines are presented to show how PD teams, operating in globally dispersed organisations, may use Web 2.0 tools to improve employee collaboration and knowledge sharing.
Abstract:
Content Based Image Retrieval is one of the prominent areas in Computer Vision and Image Processing. Recognition of handwritten characters has been a popular area of research for many years and still remains an open problem. The proposed system uses visual image queries to retrieve similar images from a database of Malayalam handwritten characters. Local Binary Pattern (LBP) descriptors of the query images are extracted and compared with the features of the images in the database to retrieve the desired characters. The system, using local binary patterns, gives excellent retrieval performance.
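A minimal sketch of this retrieval pipeline using scikit-image's uniform LBP operator is shown below; the file paths, neighbourhood parameters and chi-square distance are illustrative assumptions rather than the exact system described.

```python
# LBP-based retrieval sketch: compute a uniform LBP histogram for the query
# character image and rank database images by histogram distance.
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.io import imread
from skimage.color import rgb2gray

P, R = 8, 1                      # LBP neighbourhood: 8 samples, radius 1
N_BINS = P + 2                   # number of 'uniform' LBP codes

def lbp_histogram(path):
    img = imread(path)
    if img.ndim == 3:
        img = rgb2gray(img)
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

def chi_square(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

database = ["db/char_001.png", "db/char_002.png", "db/char_003.png"]  # hypothetical paths
db_hists = [lbp_histogram(p) for p in database]

query_hist = lbp_histogram("query/char.png")                          # hypothetical query
ranking = sorted(zip(database, db_hists), key=lambda x: chi_square(query_hist, x[1]))
for path, _ in ranking:
    print(path)
```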