881 results for Compressed workweek.
Abstract:
Fusion ARTMAP is a self-organizing neural network architecture for multi-channel, or multi-sensor, data fusion. Single-channel Fusion ARTMAP is functionally equivalent to Fuzzy ART during unsupervised learning and to Fuzzy ARTMAP during supervised learning. The network has a symmetric organization such that each channel can be dynamically configured to serve as either a data input or a teaching input to the system. An ART module forms a compressed recognition code within each channel. These codes, in turn, become inputs to a single ART system that organizes the global recognition code. When a predictive error occurs, a process called parallel match tracking simultaneously raises vigilances in multiple ART modules until reset is triggered in one of them. Parallel match tracking hereby resets only that portion of the recognition code with the poorest match, or minimum predictive confidence. This internally controlled selective reset process is a type of credit assignment that creates a parsimoniously connected learned network. Fusion ARTMAP's multi-channel coding is illustrated by simulations of the Quadruped Mammal database.
Abstract:
The distributed outstar, a generalization of the outstar neural network for spatial pattern learning, is introduced. In the outstar, signals from a source node cause weights to learn and recall arbitrary patterns across a target field of nodes. The distributed outstar replaces the outstar source node with a source field of arbitrarily many nodes, whose activity pattern may be arbitrarily distributed or compressed. Learning proceeds according to a principle of atrophy due to disuse, whereby a path weight decreases in joint proportion to the transmitted path signal and the degree of disuse of the target node. During learning, the total signal to a target node converges toward that node's activity level. Weight changes at a node are apportioned according to the distributed pattern of converging signals. Three synaptic transmission functions, defined by a product rule, a capacity rule, and a threshold rule, are examined for this system. The three rules are computationally equivalent when source field activity is maximally compressed, or winner-take-all. When source field activity is distributed, catastrophic forgetting may occur. Only the threshold rule solves this problem. Analysis of spatial pattern learning by distributed codes thereby leads to the conjecture that the unit of long-term memory in such a system is an adaptive threshold, rather than the multiplicative path weight widely used in neural models.
Abstract:
Fusion ARTMAP is a self-organizing neural network architecture for multi-channel, or multi-sensor, data fusion. Fusion ARTMAP generalizes the fuzzy ARTMAP architecture in order to adaptively classify multi-channel data. The network has a symmetric organization such that each channel can be dynamically configured to serve as either a data input or a teaching input to the system. An ART module forms a compressed recognition code within each channel. These codes, in turn, become inputs to a single ART system that organizes the global recognition code. When a predictive error occurs, a process called parallel match tracking simultaneously raises vigilances in multiple ART modules until reset is triggered in one of them. Parallel match tracking hereby resets only that portion of the recognition code with the poorest match, or minimum predictive confidence. This internally controlled selective reset process is a type of credit assignment that creates a parsimoniously connected learned network.
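The parallel match-tracking step described above lends itself to a compact illustration. The sketch below is a minimal, assumed rendering of the idea (the match scores, uniform vigilance increment, and reset test are not the published Fusion ARTMAP equations): all channel vigilances are raised together until the channel with the poorest match resets.

```python
# Minimal sketch of parallel match tracking (illustrative only; the match
# values, the vigilance increment, and the reset test are assumptions, not
# the published Fusion ARTMAP equations).
import numpy as np

def parallel_match_tracking(matches, vigilances, step=0.01):
    """Raise all channel vigilances together until one ART module resets.

    matches    : per-channel match values M_k in [0, 1] for the currently
                 selected category in each ART module.
    vigilances : per-channel baseline vigilance rho_k.
    Returns the index of the channel that resets first, i.e. the channel
    with the poorest match (minimum predictive confidence).
    """
    matches = np.asarray(matches, dtype=float)
    rho = np.asarray(vigilances, dtype=float)
    while np.all(matches >= rho):          # no module has reset yet
        rho = rho + step                   # raise all vigilances in parallel
    return int(np.argmin(matches - rho))   # channel whose match fails first

# Example: three sensor channels; the second channel has the weakest match,
# so only its portion of the recognition code is reset after a predictive error.
print(parallel_match_tracking([0.92, 0.71, 0.88], [0.5, 0.5, 0.5]))
```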
Abstract:
The recognition of 3-D objects from sequences of their 2-D views is modeled by a family of self-organizing neural architectures, called VIEWNET, that use View Information Encoded With NETworks. VIEWNET incorporates a preprocessor that generates a compressed but 2-D invariant representation of an image, a supervised incremental learning system that classifies the preprocessed representations into 2-D view categories whose outputs are combined into 3-D invariant object categories, and a working memory that makes a 3-D object prediction by accumulating evidence from 3-D object category nodes as multiple 2-D views are experienced. The simplest VIEWNET achieves high recognition scores without the need to explicitly code the temporal order of 2-D views in working memory. Working memories are also discussed that save memory resources by implicitly coding temporal order in terms of the relative activity of 2-D view category nodes, rather than as explicit 2-D view transitions. Variants of the VIEWNET architecture may also be used for scene understanding by using a preprocessor and classifier that can determine both What objects are in a scene and Where they are located. The present VIEWNET preprocessor includes the CORT-X 2 filter, which discounts the illuminant, regularizes and completes figural boundaries, and suppresses image noise. This boundary segmentation is rendered invariant under 2-D translation, rotation, and dilation by use of a log-polar transform. The invariant spectra undergo Gaussian coarse coding to further reduce noise and 3-D foreshortening effects, and to increase generalization. These compressed codes are input into the classifier, a supervised learning system based on the fuzzy ARTMAP algorithm. Fuzzy ARTMAP learns 2-D view categories that are invariant under 2-D image translation, rotation, and dilation as well as 3-D image transformations that do not cause a predictive error. Evidence from sequences of 2-D view categories converges at 3-D object nodes that generate a response invariant under changes of 2-D view. These 3-D object nodes input to a working memory that accumulates evidence over time to improve object recognition. In the simplest working memory, each occurrence (nonoccurrence) of a 2-D view category increases (decreases) the corresponding node's activity in working memory. The maximally active node is used to predict the 3-D object. Recognition is studied with noisy and clean images using slow and fast learning. Slow learning at the fuzzy ARTMAP map field is adapted to learn the conditional probability of the 3-D object given the selected 2-D view category. VIEWNET is demonstrated on an MIT Lincoln Laboratory database of 128x128 2-D views of aircraft with and without additive noise. A recognition rate of up to 90% is achieved with one 2-D view and of up to 98.5% correct with three 2-D views. The properties of 2-D view and 3-D object category nodes are compared with those of cells in monkey inferotemporal cortex.
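The simplest working memory described above can be illustrated directly. The following sketch assumes nominal increment/decay constants and a hypothetical view-to-object map; it shows only occurrence/nonoccurrence evidence accumulation and the maximally-active-node prediction, not the full VIEWNET architecture.

```python
# Illustrative sketch of the simplest VIEWNET working memory described above:
# each observed 2-D view category bumps its node's activity up, every other
# node decays, and the maximally active node predicts the 3-D object.  The
# increment/decay constants and the category-to-object map are assumptions.
import numpy as np

def accumulate_evidence(view_sequence, n_categories, view_to_object,
                        up=1.0, down=0.1):
    activity = np.zeros(n_categories)
    for v in view_sequence:                 # v = index of the 2-D view category
        activity -= down                    # nonoccurrence: decrease activity
        activity[v] += up + down            # occurrence: net increase
        activity = np.clip(activity, 0.0, None)
    best_view = int(np.argmax(activity))    # maximally active 2-D view node
    return view_to_object[best_view]        # its learned 3-D object prediction

# Example: views 0 and 1 belong to object "A", view 2 to object "B".
print(accumulate_evidence([0, 1, 0], 3, {0: "A", 1: "A", 2: "B"}))
```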
Abstract:
It is a neural network truth universally acknowledged, that the signal transmitted to a target node must be equal to the product of the path signal times a weight. Analysis of catastrophic forgetting by distributed codes leads to the unexpected conclusion that this universal synaptic transmission rule may not be optimal in certain neural networks. The distributed outstar, a network designed to support stable codes with fast or slow learning, generalizes the outstar network for spatial pattern learning. In the outstar, signals from a source node cause weights to learn and recall arbitrary patterns across a target field of nodes. The distributed outstar replaces the outstar source node with a source field of arbitrarily many nodes, where the activity pattern may be arbitrarily distributed or compressed. Learning proceeds according to a principle of atrophy due to disuse, whereby a path weight decreases in joint proportion to the transmitted path signal and the degree of disuse of the target node. During learning, the total signal to a target node converges toward that node's activity level. Weight changes at a node are apportioned according to the distributed pattern of converging signals. Three types of synaptic transmission, a product rule, a capacity rule, and a threshold rule, are examined for this system. The three rules are computationally equivalent when source field activity is maximally compressed, or winner-take-all. When source field activity is distributed, catastrophic forgetting may occur. Only the threshold rule solves this problem. Analysis of spatial pattern learning by distributed codes thereby leads to the conjecture that the optimal unit of long-term memory in such a system is a subtractive threshold, rather than a multiplicative weight.
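For illustration only, the sketch below writes plausible closed forms for the three transmission rules and for the atrophy-due-to-disuse update described above. The specific functional forms are assumptions chosen to contrast a multiplicative weight with a subtractive adaptive threshold; they are not the published distributed outstar equations, and the growth term of the learning law is omitted.

```python
# Illustrative comparison of the three transmission rules discussed above.
# These closed forms (product, capacity-limited, and subtractive threshold)
# are assumptions used only to show where a multiplicative weight and an
# adaptive threshold differ as units of long-term memory.
import numpy as np

def transmit_product(signal, weight):
    return signal * weight                        # classic multiplicative path weight

def transmit_capacity(signal, weight, capacity=1.0):
    return np.minimum(signal * weight, capacity)  # transmission capped by a capacity

def transmit_threshold(signal, threshold):
    return np.maximum(signal - threshold, 0.0)    # subtractive adaptive threshold

def atrophy_update(memory, transmitted, target_activity, rate=0.1):
    """Atrophy due to disuse, as stated in the abstract: the memory trace
    decreases in joint proportion to the transmitted path signal and the
    disuse (1 - activity) of the target node.  The growth term is omitted."""
    disuse = 1.0 - target_activity
    return memory - rate * transmitted * disuse

print(transmit_product(0.8, 0.5), transmit_threshold(0.8, 0.3))
```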
Abstract:
The thesis initially gives an overview of the wave industry and the current state of some of the leading technologies as well as the energy storage systems that are inherently part of the power take-off mechanism. The benefits of electrical energy storage systems for wave energy converters are then outlined as well as the key parameters required from them. The options for storage systems are investigated and the reasons for examining supercapacitors and lithium-ion batteries in more detail are shown. The thesis then focusses on a particular type of offshore wave energy converter in its analysis, the backward bent duct buoy employing a Wells turbine. Variable speed strategies from the research literature which make use of the energy stored in the turbine inertia are examined for this system, and based on this analysis an appropriate scheme is selected. A supercapacitor power smoothing approach is presented in conjunction with the variable speed strategy. As long component lifetime is a requirement for offshore wave energy converters, a computer-controlled test rig has been built to validate supercapacitor lifetimes against the manufacturer's specifications. The test rig is also utilised to determine the effect of temperature on supercapacitors and to determine application lifetime. Cycle testing is carried out on individual supercapacitors at room temperature, and also at rated temperature, utilising a thermal chamber and equipment programmed through the general purpose interface bus by Matlab. Application testing is carried out using time-compressed scaled-power profiles from the model to allow a comparison of lifetime degradation. Further applications of supercapacitors in offshore wave energy converters are then explored. These include start-up of the non-self-starting Wells turbine, and low-voltage ride-through examined to the limits specified in the Irish grid code for wind turbines. These applications are investigated with a more complete model of the system that includes a detailed back-to-back converter coupling a permanent magnet synchronous generator to the grid. Supercapacitors have been utilised in combination with battery systems for many applications to aid with peak power requirements and have been shown to improve the performance of these energy storage systems. The design, implementation, and construction of coupling a 5 kWh lithium-ion battery to a microgrid are described. The high-voltage battery had a continuous power rating of 10 kW and was designed for the future EV market with a controller area network interface. This build gives a general insight into some of the engineering, planning, safety, and cost requirements of implementing a high-power energy storage system near or on an offshore device for interface to a microgrid or grid.
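As a rough illustration of the supercapacitor power-smoothing idea mentioned above, the sketch below low-pass filters a pulsating generated-power profile into a grid reference and lets a supercapacitor absorb the difference while its voltage is tracked. The filter time constant, capacitance, and voltage limits are placeholder values, not the thesis's design or control scheme.

```python
# Minimal sketch of supercapacitor power smoothing of a pulsating wave-power
# profile (illustrative only; the filter time constant, capacitance, voltage
# limits, and the exponential-smoothing reference are assumptions, not the
# thesis's control scheme).
import numpy as np

def smooth_with_supercap(p_gen, dt=1.0, tau=30.0, C=63.0, v0=100.0,
                         v_min=50.0, v_max=125.0):
    """p_gen: generated power [W] per time step.  Returns (p_grid, v_sc)."""
    alpha = dt / (tau + dt)
    p_grid, v = [], v0
    p_ref = p_gen[0]
    for p in p_gen:
        p_ref += alpha * (p - p_ref)        # low-pass power reference to the grid
        p_sc = p - p_ref                    # surplus absorbed by the supercapacitor
        e = 0.5 * C * v**2 + p_sc * dt      # update stored energy
        v = np.sqrt(max(2.0 * e / C, 0.0))
        v = min(max(v, v_min), v_max)       # clamp to the usable voltage window
        p_grid.append(p_ref)
    return np.array(p_grid), v

# Example: a rectified-sinusoid power profile typical of an oscillating device.
t = np.arange(0, 120, 1.0)
p = 5e3 * np.abs(np.sin(2 * np.pi * t / 10.0))
p_grid, v_end = smooth_with_supercap(p)
print(round(p_grid[-1]), round(v_end, 1))
```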
Abstract:
This paper focuses on the nature of jamming, as seen in two-dimensional frictional granular systems consisting of photoelastic particles. The photoelastic technique is unique at this time in its capability to provide detailed particle-scale information on forces and kinematic quantities such as particle displacements and rotations. These experiments first explore isotropic stress states near point J through measurements of the mean contact number per particle, Z, and the pressure, P, as functions of the packing fraction, φ. In this case, the experiments show some but not all aspects of jamming, as expected on the basis of simulations and models that typically assume conservative, hence frictionless, forces between particles. Specifically, there is a rapid growth in Z at a reasonable φ, which we identify as φ_c. It is possible to fit Z and P to power-law expressions in φ - φ_c above φ_c, and to obtain exponents that are in agreement with simulations and models. However, the experiments differ from theory on several points, as typified by the rounding that is observed in Z and P near φ_c. The application of shear to these same 2D granular systems leads to phenomena that are qualitatively different from the standard picture of jamming. In particular, there is a range of packing fractions below φ_c where the application of shear strain at constant φ leads to jammed stress-anisotropic states, i.e. they have a non-zero shear stress, τ. The application of shear strain to an initially isotropically compressed (hence jammed) state does not lead to an unjammed state per se. Rather, shear strain at constant φ first leads to an increase of both τ and P. Additional strain leads to a succession of jammed states interspersed with relatively localized failures of the force network leading to other stress-anisotropic states that are jammed at typically somewhat lower stress. The locus of jammed states requires a state space that involves not only φ and τ, but also P. P, τ, and Z are all hysteretic functions of shear strain for fixed φ. However, we find that both P and τ are roughly linear functions of Z for strains large enough to jam the system. This implies that these shear-jammed states satisfy a Coulomb-like relation, τ = μP. © 2010 The Royal Society of Chemistry.
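The power-law fits mentioned above (Z and P versus φ - φ_c above φ_c) can be reproduced in outline with a standard curve fit. The sketch below uses synthetic data and assumed values of φ_c, Z_c, and the exponent purely to show the fitting step; none of the numbers are the paper's measurements.

```python
# Illustrative fit of the power-law form mentioned above,
# Z - Z_c ~ (phi - phi_c)**beta for phi > phi_c (P is fit the same way).
# The synthetic data, exponent, and phi_c value are assumptions used only
# to show the fitting step; they are not the paper's measurements.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
phi_c, z_c = 0.842, 3.0
phi = np.linspace(phi_c + 1e-3, phi_c + 0.05, 40)
z = z_c + 1.8 * (phi - phi_c) ** 0.5 + rng.normal(0, 0.02, phi.size)

def power_law(phi, amplitude, exponent):
    return z_c + amplitude * (phi - phi_c) ** exponent

(amp, beta), _ = curve_fit(power_law, phi, z, p0=(1.0, 0.5))
print(f"fitted exponent beta ~= {beta:.2f}")   # ~0.5 for this synthetic data
```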
Abstract:
Nonradiative coupling between conductive coils is a candidate mechanism for wireless energy transfer applications. In this paper we propose a power relay system based on a near-field metamaterial superlens and present a thorough theoretical analysis of this system. We use time-harmonic circuit formalism to describe all interactions between two coils attached to external circuits and a slab of anisotropic medium with homogeneous permittivity and permeability. The fields of the coils are found in the point-dipole approximation using Sommerfeld integrals which are reduced to standard special functions in the long-wavelength limit. We show that, even with a realistic magnetic loss tangent of order 0.1, the power transfer efficiency with the slab can be an order of magnitude greater than free-space efficiency when the load resistance exceeds a certain threshold value. We also find that the volume occupied by the metamaterial between the coils can be greatly compressed by employing magnetic permeability with a large anisotropy ratio. © 2011 American Physical Society.
Abstract:
This paper introduces the concept of adaptive temporal compressive sensing (CS) for video. We propose a CS algorithm to adapt the compression ratio based on the scene's temporal complexity, computed from the compressed data, without compromising the quality of the reconstructed video. The temporal adaptivity is manifested by manipulating the integration time of the camera, opening the possibility of real-time implementation. The proposed algorithm is a generalized temporal CS approach that can be incorporated with a diverse set of existing hardware systems. © 2013 IEEE.
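A minimal sketch of the adaptation idea, not the paper's algorithm: estimate the scene's temporal complexity from successive compressed snapshots and map it to a compression ratio (equivalently, an integration time). The complexity metric, thresholds, and candidate ratios are assumptions.

```python
# Sketch of the adaptation idea only (not the paper's algorithm): estimate the
# scene's temporal complexity from successive compressed snapshots and map it
# to a compression ratio / camera integration time.  The complexity metric,
# thresholds, and candidate ratios below are assumptions.
import numpy as np

def choose_compression_ratio(prev_snapshot, curr_snapshot,
                             ratios=(8, 16, 32), thresholds=(0.05, 0.02)):
    """Higher frame-to-frame change in the *compressed* data -> lower ratio."""
    diff = np.abs(curr_snapshot - prev_snapshot).mean()
    scale = np.abs(prev_snapshot).mean() + 1e-12
    complexity = diff / scale                      # normalized temporal change
    if complexity > thresholds[0]:
        return ratios[0]                           # fast scene: compress less
    if complexity > thresholds[1]:
        return ratios[1]
    return ratios[2]                               # static scene: compress more

# Example with two random coded snapshots.
a = np.random.rand(64, 64)
print(choose_compression_ratio(a, a + 0.1 * np.random.rand(64, 64)))
```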
Abstract:
We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.
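The coded-snapshot measurement implied above can be written as y = Σ_t C_t ⊙ x_t, with the code C_t obtained by mechanically translating a single aperture. The sketch below implements only this forward model with an assumed random mask and shift pattern; reconstruction is not shown.

```python
# Sketch of the coded-snapshot forward model implied above: each of T frames
# is modulated by a translated binary coded aperture and the results are summed
# into one snapshot (y = sum_t C_t * x_t).  The mask, shift pattern, and frame
# count are assumptions used for illustration.
import numpy as np

def coded_snapshot(frames, mask, shifts):
    """frames: (T, H, W) video block; mask: (H, W) binary code;
    shifts: per-frame integer translations of the mask (mechanical motion)."""
    snapshot = np.zeros(frames.shape[1:])
    for frame, s in zip(frames, shifts):
        snapshot += np.roll(mask, s, axis=1) * frame   # translated code x frame
    return snapshot

T, H, W = 12, 32, 32
frames = np.random.rand(T, H, W)
mask = (np.random.rand(H, W) > 0.5).astype(float)
y = coded_snapshot(frames, mask, shifts=range(T))
print(y.shape)   # one (H, W) snapshot encoding >10 frames of temporal data
```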
Abstract:
We propose a theoretical framework for predicting the protocol dependence of the jamming transition for frictionless spherical particles that interact via repulsive contact forces. We study isostatic jammed disk packings obtained via two protocols: isotropic compression and simple shear. We show that for frictionless systems, all jammed packings can be obtained via either protocol. However, the probability to obtain a particular jammed packing depends on the packing-generation protocol. We predict the average shear strain required to jam initially unjammed isotropically compressed packings from the density of jammed packings, shape of their basins of attraction, and path traversed in configuration space. We compare our predictions to simulations of shear strain-induced jamming and find quantitative agreement. We also show that the packing fraction range, over which shear strain-induced jamming occurs, tends to zero in the large system limit for frictionless packings with overdamped dynamics.
Abstract:
© 2015 Elsevier Ltd. All rights reserved. Laboratory tests at the microscale are reported in which millimeter-sized amorphous silica cubes were kept highly compressed in a liquid environment of de-ionized water solutions with different silica ion concentrations for up to four weeks. Such an arrangement simulates the early evolution of bonds between two sand grains stressed in situ. An in-house designed Grain Indenter-Puller apparatus allowed the strength of such contacts to be measured after 3-4 weeks. The observations, reported for the first time, confirm a long-standing hypothesis that a stressed contact with microcracks generates silica polymers, forming a bonding structure between the grains on a timescale of the order of a few weeks. Such a structure exhibits an intergranular tensile force at failure of 1-1.5 mN when aged in solutions containing silica ion concentrations of 200 to 500 ppm. The magnitude of such an intergranular force is 2-3 times greater than that of the water capillary force between the same grains.
Abstract:
PURPOSE: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D+dual energy+time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. METHODS: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time- and energy-averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time- and energy-averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image-domain filtration approach, which the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time- and energy-averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling. The authors solved the 5D reconstruction problem using the split Bregman method and GPU-based implementations of backprojection, reprojection, and kernel regression. Using a preclinical mouse model, the authors apply the proposed algorithm to study myocardial injury following radiation treatment of breast cancer. RESULTS: Quantitative 5D simulations are performed using the MOBY mouse phantom. Twenty data sets (ten cardiac phases, two energies) are reconstructed with 88 μm isotropic voxels from 450 total projections acquired over a single 360° rotation. In vivo 5D myocardial injury data sets acquired in two mice injected with gold and iodine nanoparticles are also reconstructed with 20 data sets per mouse using the same acquisition parameters (dose: ∼60 mGy). For both the simulations and the in vivo data, the reconstruction quality is sufficient to perform material decomposition into gold and iodine maps to localize the extent of myocardial injury (gold accumulation) and to measure cardiac functional metrics (vascular iodine). The 5D CT imaging protocol represents a 95% reduction in radiation dose per cardiac phase and energy and a 40-fold decrease in projection sampling time relative to the authors' standard imaging protocol. CONCLUSIONS: The authors' 5D CT data acquisition and reconstruction protocol efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time.
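The low-rank and sparse matrix decomposition framework named in METHODS can be illustrated generically. The sketch below alternates singular-value thresholding and entrywise soft thresholding on a synthetic matrix; it is a textbook-style stand-in, not the authors' rank-sparse kernel regression or split Bregman reconstruction, and all parameters are assumptions.

```python
# Generic low-rank + sparse matrix decomposition sketch (M ~ L + S) using
# alternating singular-value thresholding and soft thresholding.  This is a
# textbook-style illustration of the framework named above, not the authors'
# rank-sparse kernel regression / split Bregman reconstruction.
import numpy as np

def svt(x, tau):
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return u @ np.diag(np.maximum(s - tau, 0.0)) @ vt     # shrink singular values

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)  # entrywise shrinkage

def low_rank_plus_sparse(m, tau_l=1.0, tau_s=0.1, iters=50):
    low_rank, sparse = np.zeros_like(m), np.zeros_like(m)
    for _ in range(iters):
        low_rank = svt(m - sparse, tau_l)    # shared, slowly varying background
        sparse = soft(m - low_rank, tau_s)   # sparse temporal/spectral contrast
    return low_rank, sparse

# Example: rows could be voxels, columns time points / energies.
rng = np.random.default_rng(1)
m = np.outer(rng.random(200), rng.random(20))             # rank-1 background
m[rng.random(m.shape) < 0.02] += 1.0                      # sparse contrast
low_rank, sparse = low_rank_plus_sparse(m)
print(np.linalg.matrix_rank(np.round(low_rank, 6)), int((np.abs(sparse) > 0).sum()))
```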
Abstract:
This paper considers a special class of flow-shop problems, known as the proportionate flow shop. In such a shop, each job flows through the machines in the same order and has equal processing times on the machines. The processing times of different jobs may be different. It is assumed that all operations of a job may be compressed by the same amount, which will incur an additional cost. The objective is to minimize the makespan of the schedule together with a compression cost function which is non-decreasing with respect to the amount of compression. For a bicriterion problem of minimizing the makespan and a linear cost function, an O(n log n) algorithm is developed to construct the Pareto optimal set. For a single criterion problem, an O(n²) algorithm is developed to minimize the sum of the makespan and compression cost. Copyright © 1999 John Wiley & Sons, Ltd.
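For orientation, the single-criterion objective can be evaluated directly. The sketch below assumes the standard sequence-independent proportionate flow-shop makespan, sum(p_j) + (m - 1)·max(p_j), applied to compressed processing times plus a linear compression cost; it only evaluates the trade-off and does not reproduce the paper's O(n log n) or O(n²) algorithms.

```python
# Evaluation sketch for the objective discussed above.  The makespan formula
# sum(p_j) + (m - 1) * max(p_j) for a proportionate flow shop and the linear
# cost coefficients are assumptions used only to illustrate the trade-off,
# not the paper's algorithms.

def objective(p, x, m, cost_per_unit):
    """p: base processing times; x: compression amount per job (x_j <= p_j);
    m: number of machines; returns makespan + linear compression cost."""
    q = [pj - xj for pj, xj in zip(p, x)]          # compressed processing times
    makespan = sum(q) + (m - 1) * max(q)
    compression_cost = sum(c * xj for c, xj in zip(cost_per_unit, x))
    return makespan + compression_cost

p = [7, 3, 9, 5]
print(objective(p, [0, 0, 0, 0], m=3, cost_per_unit=[1, 1, 1, 1]))  # no compression -> 42
print(objective(p, [0, 0, 2, 0], m=3, cost_per_unit=[1, 1, 1, 1]))  # compress longest job -> 38
```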
Abstract:
The purpose of this study was to mathematically characterize the effects of defined experimental parameters (probe speed and the ratio of the probe diameter to the diameter of the sample container) on the textural/mechanical properties of model gel systems. In addition, this study examined the applicability of dimensional analysis for the rheological interpretation of textural data in terms of shear stress and rate of shear. Aqueous gels (pH 7) were prepared containing 15% w/w poly(methylvinylether-co-maleic anhydride) and poly(vinylpyrrolidone) (PVP) (0, 3, 6, or 9% w/w). Texture profile analysis (TPA) was performed using a Stable Micro Systems texture analyzer (model TA-XT 2; Surrey, UK) in which an analytical probe was twice compressed into each formulation to a defined depth (15 mm) and at defined rates (1, 3, 5, 8, and 10 mm s-1), allowing a delay period (15 s) between the end of the first and the beginning of the second compression. Flow rheograms were obtained using a Carri-Med CSL2-100 rheometer (TA Instruments, Surrey, UK) with parallel plate geometry under controlled shearing stresses at 20.0 ± 0.1°C. All formulations exhibited pseudoplastic flow with no thixotropy. Increasing concentrations of PVP significantly increased formulation hardness, compressibility, adhesiveness, and consistency. Increased hardness, compressibility, and consistency were ascribed to enhanced polymeric entanglements, thereby increasing the resistance to deformation. Increasing probe speed increased formulation hardness in a linear manner, because of the effects of probe speed on probe displacement and surface area. The relationship between formulation hardness and probe displacement was linear and was dependent on probe speed. Furthermore, the proportionality constant (gel strength) increased as a function of PVP concentration. The relationship between formulation hardness and diameter ratio was biphasic and was statistically defined by two linear relationships relating to diameter ratios from 0 to 0.4 and from 0.4 to 0.563. The dramatically increased hardness, associated with diameter ratios in excess of 0.4, was attributed to boundary effects, that is, the effect of the container wall on product flow. Using dimensional analysis, the hardness and probe displacement in TPA were mathematically transformed into the corresponding rheological parameters, namely shearing stress and rate of shear, thereby allowing the application of the power law (τ = kγ̇^n) to textural data. Importantly, the consistencies (k) of the formulations, calculated using the transformed textural data, were statistically similar to those obtained using flow rheometry. In conclusion, this study has, firstly, characterized the relationships between textural data and two key instrumental parameters in TPA and, secondly, described a method by which rheological information may be derived using this technique. This will enable a greater application of TPA for the rheological characterization of pharmaceutical gels and, in addition, will enable efficient interpretation of textural data obtained under different experimental parameters.
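The power-law step above reduces to a linear regression in log-log coordinates. The sketch below fits τ = kγ̇^n to synthetic stress/shear-rate pairs standing in for the transformed TPA data; the values of k and n are assumptions, not the study's measured consistencies.

```python
# Minimal sketch of fitting the power law tau = k * (shear rate)**n to
# stress/shear-rate pairs such as those obtained from the transformed TPA
# data; the synthetic values of k and n below are assumptions, not the
# formulations' measured consistencies.
import numpy as np

shear_rate = np.array([1.0, 3.0, 5.0, 8.0, 10.0])        # s^-1 (from probe speeds)
true_k, true_n = 250.0, 0.45                              # Pa.s^n, flow index
shear_stress = true_k * shear_rate ** true_n              # Pa

# Linear regression in log-log space: log(tau) = log(k) + n*log(shear rate).
n_fit, log_k_fit = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
k_fit = np.exp(log_k_fit)
print(f"consistency k ~= {k_fit:.0f} Pa.s^n, flow index n ~= {n_fit:.2f}")
```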