956 results for Methods engineering.


Relevance: 60.00%

Abstract:

Supersedes LC 416.

Relevance: 60.00%

Abstract:

Philosophy of work.--Secret of work.--Duty or motive in work.

Relevance: 60.00%

Abstract:

"Study made for Methods Engineering, Course 3262, Cornell University, for term paper, Fall, 1956."

Relevance: 60.00%

Abstract:

"Embodies a course given by the writer for a number of years in the mathematical laboratory of the Massachusetts Institute of Technology."

Relevance: 60.00%

Abstract:

The expectation-maximization (EM) algorithm has attracted considerable interest in recent years as the basis for various algorithms in neural network application areas such as pattern recognition. However, some misconceptions persist concerning its application to neural networks. In this paper, we clarify these misconceptions and consider how the EM algorithm can be applied to train multilayer perceptron (MLP) and mixture-of-experts (ME) networks for multiclass classification. We identify situations where applying the EM algorithm to train MLP networks may be of limited value and discuss ways of handling the difficulties. For ME networks, the literature reports that networks trained by the EM algorithm, using the iteratively reweighted least squares (IRLS) algorithm in the inner loop of the M-step, often perform poorly in multiclass classification. However, we found that the convergence of the IRLS algorithm is stable and that the log likelihood increases monotonically when a learning rate smaller than one is adopted. We also propose the use of an expectation-conditional maximization (ECM) algorithm to train ME networks; its performance is demonstrated to be superior to the IRLS algorithm on several simulated and real data sets.
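
The learning-rate claim can be illustrated with a minimal sketch, assuming plain binary logistic regression as a stand-in for the per-component fits inside the M-step; variable names and the toy data are ours, not the paper's:

```python
import numpy as np

def irls_step(X, y, w, lr=0.5):
    """One damped IRLS (Newton) step for binary logistic regression.

    A learning rate lr < 1 scales the Newton update; this is the
    damping the abstract credits with keeping the inner M-step loop
    stable and the log likelihood monotonically increasing.
    """
    p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
    W = p * (1.0 - p)                          # IRLS weights
    grad = X.T @ (y - p)                       # gradient of log likelihood
    H = X.T @ (X * W[:, None])                 # (negative) Hessian
    return w + lr * np.linalg.solve(H, grad)   # damped Newton update

# Toy usage: fit a separable two-feature problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
w = np.zeros(2)
for _ in range(25):
    w = irls_step(X, y, w, lr=0.5)
```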

Relevance: 60.00%

Abstract:

A biologically realizable, unsupervised learning rule is described for the online extraction of object features, suitable for solving a range of object recognition tasks. Alterations to the basic learning rule are proposed that allow the rule to better suit the parameters of a given input space. One negative consequence of such modifications is the potential for learning instability. The criteria for such instability are modeled using digital filtering techniques, and the predicted regions of stability and instability are tested. The result is a family of learning rules that can be tailored to the specific environment, improving both convergence times and accuracy over the standard learning rule while simultaneously ensuring learning stability.
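
As a hedged illustration of how an online feature-extraction rule's stability hinges on its gain parameter, here is Oja's rule, a classic stand-in rather than the paper's actual rule or its filter-based stability analysis; too large a gain eta destabilizes learning:

```python
import numpy as np

def oja_update(w, x, eta=0.005):
    """One online Oja update: Hebbian growth plus a decay term that
    keeps the weight norm bounded (the stabilizing feedback)."""
    y = w @ x
    return w + eta * y * (x - y * w)

rng = np.random.default_rng(1)
# Inputs with a dominant direction; Oja's rule converges to the
# first principal component of the input covariance.
C = np.array([[3.0, 1.0], [1.0, 1.0]])
L = np.linalg.cholesky(C)
w = rng.normal(size=2)
for _ in range(5000):
    w = oja_update(w, L @ rng.normal(size=2))
```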

Relevance: 60.00%

Abstract:

The parameterless self-organizing map (PLSOM) is a new neural network algorithm based on the self-organizing map (SOM). It eliminates the need for a learning rate and for annealing schemes for the learning rate and neighborhood size. We discuss the relative performance of the PLSOM and the SOM and demonstrate some tasks in which the SOM fails but the PLSOM performs satisfactorily. Finally, we discuss some example applications of the PLSOM and present a proof of ordering under certain limited conditions.
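
A minimal sketch of the PLSOM idea, in which the update magnitude is driven by a normalized quantization error rather than an annealed learning rate; the variable names and the 1-D map are our own simplification:

```python
import numpy as np

def plsom_step(weights, x, rho, beta=2.0):
    """One PLSOM-style update on a 1-D map.

    rho tracks the largest squared quantization error seen so far;
    the normalized error eps replaces both the learning rate and the
    annealing schedules of the standard SOM.
    """
    d2 = np.sum((weights - x) ** 2, axis=1)    # squared distances to input
    c = int(np.argmin(d2))                     # best-matching unit
    rho = max(rho, d2[c])                      # update scaling variable
    eps = d2[c] / rho if rho > 0 else 0.0      # normalized error in [0, 1]
    width = beta * eps + 1e-9                  # neighborhood shrinks with fit
    idx = np.arange(len(weights))
    h = np.exp(-((idx - c) ** 2) / (width ** 2))
    weights += eps * h[:, None] * (x - weights)
    return weights, rho

rng = np.random.default_rng(2)
weights = rng.uniform(size=(20, 2))
rho = 0.0
for _ in range(2000):
    weights, rho = plsom_step(weights, rng.uniform(size=2), rho)
```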

Relevance: 60.00%

Abstract:

In this letter, we propose a class of self-stabilizing learning algorithms for minor component analysis (MCA), which includes a few well-known MCA learning algorithms. Self-stabilizing means that the sign of the change in the weight vector's length is independent of the presented input vector. For these algorithms, a rigorous global convergence proof is given, and the convergence rate is also discussed. By combining the positive properties of these algorithms, a new learning algorithm is proposed that improves performance. Simulations are employed to confirm our theoretical results.
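
To make "self-stabilizing" concrete: a naive anti-Hebbian MCA rule needs explicit renormalization to keep the weight length from collapsing, which self-stabilizing rules avoid by construction. A minimal sketch of that naive baseline (our own, not any of the letter's algorithms):

```python
import numpy as np

def mca_step(w, x, eta=0.002):
    """Naive anti-Hebbian minor-component step with explicit
    renormalization. Self-stabilizing MCA rules make the sign of the
    weight-length change independent of the input, so this projection
    back to unit norm is unnecessary there."""
    y = w @ x
    w = w - eta * y * x            # anti-Hebbian: shrink along the input
    return w / np.linalg.norm(w)   # keep the length from collapsing

rng = np.random.default_rng(3)
C = np.diag([4.0, 2.0, 0.5])                 # minor component is e3
L = np.linalg.cholesky(C)
w = rng.normal(size=3)
w /= np.linalg.norm(w)
for _ in range(20000):
    w = mca_step(w, L @ rng.normal(size=3))
# w should align with the smallest-variance direction [0, 0, +/-1].
```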

Relevance: 60.00%

Abstract:

Due to complex field/tissue interactions, high-field magnetic resonance (MR) images suffer significant distortions that compromise diagnostic quality. A new method that attempts to remove these distortions is proposed in this paper, based on the use of transceive phased arrays. In the examples presented herein, the proposed system uses a shielded four-element transceive phased-array head coil and involves performing two separate scans of the same slice, each scan using a different excitation during transmission. By optimizing the amplitudes and phases for each scan, antipodal signal profiles can be obtained, and by combining the two images, the image distortion can be reduced several fold. A combined hybrid method of moments (MoM)/finite element method (FEM) and finite-difference time-domain (FDTD) technique is proposed and used to elucidate the concept of the new method and to accurately evaluate the electromagnetic field (EMF) in a human head model. In addition, the proposed method is used in conjunction with the generalized autocalibrating partially parallel acquisitions (GRAPPA) reconstruction technique to enable rapid imaging of the two scans. Simulation results reported herein for 11-T (470-MHz) brain imaging applications show that the new method with GRAPPA reconstruction theoretically results in improved image quality and that the proposed combined hybrid MoM/FEM and FDTD technique is suitable for high-field magnetic resonance imaging (MRI) numerical analysis.
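
The core idea of combining two scans with complementary (antipodal) excitation profiles can be sketched in one dimension; this is a toy model with made-up profile shapes, not the MoM/FEM/FDTD-computed fields:

```python
import numpy as np

# Toy 1-D field-of-view coordinate.
x = np.linspace(-1.0, 1.0, 257)

# Hypothetical antipodal transmit-sensitivity profiles: where one
# excitation is weak the other is strong, as produced by optimizing
# the per-element amplitudes and phases separately for the two scans.
s1 = 1.0 + 0.6 * np.cos(np.pi * x)
s2 = 1.0 - 0.6 * np.cos(np.pi * x)

obj = np.ones_like(x)            # uniform object for illustration
img1, img2 = s1 * obj, s2 * obj  # two shaded single-scan images

# Sum-of-squares combination flattens the shading several fold.
combined = np.sqrt(img1**2 + img2**2) / np.sqrt(2.0)
print(f"single-scan ripple: {img1.max() - img1.min():.3f}")
print(f"combined ripple:    {combined.max() - combined.min():.3f}")
```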

Relevance: 60.00%

Abstract:

In a cyber-physical system (CPS), computational resources and physical resources are strongly correlated and mutually dependent. Cascading failures occur between coupled networks, making the system more fragile than a single network. Beyond the widely used giant-component metric, we study small clusters (small components) in interdependent networks after cascading failures occur. We first give an overview of how small clusters are distributed in various single networks. We then propose a percolation-theory-based mathematical method to study how small clusters are affected by the interdependence between two coupled networks. We prove that upper bounds exist for both the fraction and the number of operating small clusters. Without loss of generality, we use both synthetic and real network data in simulations to study small clusters under different interdependence models and network topologies. The extensive simulations highlight our findings: besides the giant component, a considerable proportion of small clusters exists, with the remaining part fragmenting into very tiny pieces or even isolated single vertices; and no matter how tightly the two networks are coupled, an upper bound exists on the size of small clusters. We also find that interdependent small-world networks generally have the highest fraction of operating small clusters. Three attack strategies are compared: Inter Degree Priority Attack, Intra Degree Priority Attack, and Random Attack. We observe that the fraction of functioning small clusters remains stable and is independent of the attack strategy.
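
A hedged simulation sketch of this kind of experiment: a random attack on one network propagates to its one-to-one interdependent partner, after which surviving small components are counted. The dependency model is deliberately simplified (a node needs only its partner, not giant-component membership, so small clusters can keep operating); topologies and parameters are illustrative:

```python
import random
import networkx as nx

def attack_and_cascade(gA, gB, frac=0.3, seed=0):
    """Random attack on network A; with one-to-one interdependence
    (same node id = interdependent pair), a node in either network
    fails when its partner fails. Returns the surviving subgraphs."""
    random.seed(seed)
    attacked = set(random.sample(sorted(gA), int(frac * len(gA))))
    alive = set(gA) - attacked
    return gA.subgraph(alive), gB.subgraph(alive)

def small_clusters(g):
    """All connected components except the largest (giant) one."""
    comps = sorted(nx.connected_components(g), key=len)
    return comps[:-1]

gA = nx.erdos_renyi_graph(2000, 0.002, seed=1)
gB = nx.watts_strogatz_graph(2000, 4, 0.1, seed=2)  # small-world partner
subA, subB = attack_and_cascade(gA, gB)
for name, sub in (("A", subA), ("B", subB)):
    sc = small_clusters(sub)
    print(name, len(sc), "small clusters,",
          sum(map(len, sc)), "nodes outside the giant component")
```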

Relevance: 60.00%

Abstract:

With the increasing popularity of utility-oriented computing, where resources are traded as services, efficient management of quality of service (QoS) has become increasingly significant to both service consumers and service providers. For distributed multimedia content adaptation deployed on service-oriented computing, ensuring the stringent QoS requirements of the content adaptation is a significant and immediate challenge, yet QoS guarantees in this context have not been accorded the attention they deserve. In this paper, we address this problem. We formulate SLA management for distributed multimedia content adaptation deployed on service-oriented computing as an integer programming problem. We propose an SLA management framework that enables the service provider to determine deliverable QoS before settling an SLA with potential service consumers, so as to optimize QoS guarantees. We analyze the performance of the proposed strategy under various conditions in terms of the SLA success rate, the rejection rate, and the impact of resource-data errors on potential violations of the agreed-upon SLA. We also compare the proposed SLA management framework with a baseline approach in which the distributed multimedia content adaptation is deployed on a service-oriented platform without SLA consideration. The experimental results show that the proposed SLA management framework substantially outperforms the baseline approach, confirming that SLA management is a core requirement for deploying distributed multimedia content adaptation on service-oriented systems.
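
A toy version of the admission decision as an integer program, solved by brute force over a handful of requests; the demands, deadlines, rates, and the simple latency model are invented for illustration, not the paper's formulation:

```python
from itertools import product

# Each candidate SLA: (cpu_demand, deadline_ms, revenue). A request is
# admitted (x_i = 1) only if total demand fits capacity and its
# deliverable latency meets the consumer's QoS requirement.
requests = [(2, 80, 5.0), (3, 120, 7.0), (1, 60, 3.0), (4, 200, 8.0)]
CAPACITY = 6
MS_PER_CPU_UNIT = 20   # toy model: service latency per unit of demand

best_value, best_x = 0.0, None
for x in product((0, 1), repeat=len(requests)):
    load = sum(xi * r[0] for xi, r in zip(x, requests))
    if load > CAPACITY:
        continue   # capacity constraint violated
    # Deliverable-QoS check: estimated latency must meet each deadline.
    if any(xi and r[0] * MS_PER_CPU_UNIT > r[1] for xi, r in zip(x, requests)):
        continue
    value = sum(xi * r[2] for xi, r in zip(x, requests))
    if value > best_value:
        best_value, best_x = value, x

print("admit:", best_x, "revenue:", best_value)
```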

Relevance: 60.00%

Abstract:

Feature-based camera model identification plays an important role in forensic investigations of images. Conventional feature-based identification schemes suffer from the problem of unknown models; that is, some images are captured by camera models previously unknown to the identification system. To address this problem, we propose a new scheme: Source Camera Identification with Unknown models (SCIU). It can identify images from unknown models as well as distinguish images from known models. The SCIU scheme consists of three stages: 1) unknown detection; 2) unknown expansion; and 3) (K+1)-class classification. Unknown detection applies a k-nearest-neighbours method to recognize a few sample images of unknown models among the unlabeled images. Unknown expansion further extends the set of unknown sample images using a self-training strategy. Then we address a specific (K+1)-class classification problem, in which the sample images of unknown (1 class) and known (K classes) models are combined to train a classifier. In addition, we develop a parameter optimization method for unknown detection and investigate the stopping criterion for unknown expansion. Experiments carried out on the Dresden image collection confirm the effectiveness of the proposed SCIU scheme. When unknown models are present, the identification accuracy of SCIU is significantly better than that of four state-of-the-art methods: 1) multi-class Support Vector Machine (SVM); 2) binary SVM; 3) combined classification framework; and 4) decision boundary carving.
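
A hedged sketch of a distance-based unknown-detection stage in the spirit of stage 1: flag unlabeled feature vectors that sit unusually far from the known-model samples. The features, threshold rule, and names are ours, not the paper's:

```python
import numpy as np

def knn_mean_dist(ref, points, k, skip_self=False):
    """Mean distance from each point to its k nearest samples in ref."""
    d = np.linalg.norm(points[:, None, :] - ref[None, :, :], axis=2)
    d.sort(axis=1)
    lo = 1 if skip_self else 0          # drop the zero self-distance
    return d[:, lo:lo + k].mean(axis=1)

def detect_unknown(known, unlabeled, k=5, quantile=0.95):
    """Flag unlabeled vectors far from known-model samples; the
    threshold is a quantile of the known samples' own k-NN distances,
    a simple stand-in for the paper's parameter-optimization step."""
    thresh = np.quantile(knn_mean_dist(known, known, k, skip_self=True),
                         quantile)
    return knn_mean_dist(known, unlabeled, k) > thresh

rng = np.random.default_rng(4)
known = rng.normal(0.0, 1.0, size=(300, 8))    # features of known models
seen = rng.normal(0.0, 1.0, size=(50, 8))      # unlabeled, a known model
unseen = rng.normal(4.0, 1.0, size=(50, 8))    # unlabeled, unknown model
flags = detect_unknown(known, np.vstack([seen, unseen]))
print(flags[:50].mean(), flags[50:].mean())    # low vs. high flag rates
```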

Relevance: 60.00%

Abstract:

One of the issues for tour planning applications is adaptively providing personalized advice for different types of tourists and tour activities. This paper proposes a high-level Petri net based approach that provides a level of adaptation by implementing adaptive navigation in a tour node space. The new model supports dynamic reordering or removal of tour nodes along a tour path; it supports multiple travel modes and incorporates multimodality within its tour planning logic to derive adaptive tours. Examples demonstrate how to realize adaptive interfaces and personalization. Future directions are discussed at the end of the paper.
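
To make the Petri-net framing concrete, here is a toy place/token sketch of a tour path whose not-yet-visited nodes can be removed or reordered at runtime; the class, node names, and operations are illustrative only, not the paper's high-level Petri net:

```python
class TourNet:
    """Toy Petri-net-style tour: places are tour nodes, a single token
    marks the tourist's position, and firing a transition moves the
    token along the (editable) path."""

    def __init__(self, path):
        self.path = list(path)   # ordered tour nodes
        self.pos = 0             # index of the marked place

    def enabled(self):
        """Transitions enabled from the current marking."""
        return self.path[self.pos + 1:self.pos + 2]

    def fire(self):
        """Move the token to the next tour node."""
        if self.pos + 1 < len(self.path):
            self.pos += 1
        return self.path[self.pos]

    def remove(self, node):
        """Adaptive navigation: drop a not-yet-visited node."""
        i = self.path.index(node)
        if i > self.pos:
            del self.path[i]

    def reorder(self, node, new_index):
        """Move a not-yet-visited node to a new position on the path."""
        i = self.path.index(node)
        if i > self.pos and new_index > self.pos:
            self.path.insert(new_index, self.path.pop(i))

tour = TourNet(["hotel", "museum", "park", "gallery", "harbour"])
tour.remove("park")          # the tourist skips the park
tour.reorder("harbour", 2)   # and prefers the harbour earlier
while tour.enabled():
    print("visit:", tour.fire())
```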

Relevance: 60.00%

Abstract:

Sensor networks are a branch of distributed ad hoc networks with a broad range of applications in surveillance and environmental monitoring. In these networks, message exchanges are carried out in a multi-hop manner. Due to resource constraints, security professionals often use lightweight protocols, which do not provide adequate security; even in the absence of such constraints, designing a foolproof set of protocols and codes is almost impossible. This leaves the door open to worms that exploit these vulnerabilities to propagate via the multi-hop message exchange mechanism, an issue that has recently drawn the attention of security researchers. In this paper, we investigate the propagation pattern of information in wireless sensor networks based on an extended theory of epidemiology. We develop a geographical susceptible-infective model for this purpose and analytically derive the dynamics of information propagation. Compared with previous models, ours is more realistic and is distinguished by two key factors that had been neglected before: 1) the proposed model does not rely purely on epidemic theory but binds it to the geometrical and spatial constraints of real-world sensor networks; and 2) it also extends to model the spread dynamics of conflicting information (e.g., a worm and its patch). We conduct extensive simulations to show the accuracy of our model and compare it with previous ones. The findings show that the common intuition that the infection source is the best location from which to start patching is not necessarily right: this depends on many factors, including the time it takes for the patch to be developed, worm/patch characteristics, and the shape of the network.
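
A hedged toy simulation of competing worm and patch spread over a geometric (multi-hop) topology, in the spirit of the model; the graph, rates, delay, and start nodes are arbitrary choices, not the paper's analytical model:

```python
import random
import networkx as nx

def spread(g, worm_src, patch_src, patch_delay=3, p_worm=0.5,
           p_patch=0.5, steps=30, seed=0):
    """Discrete-time worm-vs-patch spread over one-hop neighbourhoods.
    Patched nodes are immune; the patch also disinfects infected nodes.
    Returns the number of nodes still infected at the end."""
    random.seed(seed)
    state = {v: "S" for v in g}   # S(usceptible), I(nfected), P(atched)
    state[worm_src] = "I"
    for t in range(steps):
        if t == patch_delay:      # the patch is released late
            state[patch_src] = "P"
        nxt = dict(state)         # synchronous update
        for v in g:
            for u in g[v]:
                if state[v] == "I" and state[u] == "S" \
                        and random.random() < p_worm:
                    nxt[u] = "I"
                if state[v] == "P" and state[u] in ("S", "I") \
                        and random.random() < p_patch:
                    nxt[u] = "P"
        state = nxt
    return sum(s == "I" for s in state.values())

g = nx.random_geometric_graph(400, 0.1, seed=5)
worm = 0
for patch in (worm, 100):         # patch from the source vs. elsewhere
    print(f"patch from node {patch}: "
          f"{spread(g, worm, patch)} nodes still infected")
```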

Relevance: 60.00%

Abstract:

The massive computation power and storage capacity of cloud computing systems allow scientists to deploy computation- and data-intensive applications without infrastructure investment, since large application data sets can be stored in the cloud. Based on the pay-as-you-go model, storage strategies and benchmarking approaches have been developed for cost-effectively storing large volumes of generated application data sets in the cloud. However, they are either insufficiently cost-effective for storage or impractical to use at runtime. In this paper, working toward the minimum-cost benchmark, we propose a novel, highly cost-effective, and practical storage strategy that can automatically decide at runtime whether a generated data set should be stored in the cloud. The main focus of this strategy is local optimization of the tradeoff between computation and storage, while secondarily also taking users' (optional) storage preferences into consideration. Both theoretical analysis and simulations, conducted on general (random) data sets as well as on specific real-world applications with Amazon's cost model, show that the cost-effectiveness of our strategy is close to or even matches the minimum-cost benchmark, and that its efficiency is high enough for practical runtime use in the cloud.
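
The local store-versus-regenerate decision can be sketched as a simple per-data-set cost comparison; this is a minimal model with invented rates in the spirit of pay-as-you-go pricing, not the paper's full benchmark or strategy:

```python
def should_store(size_gb, gen_compute_hours, uses_per_month,
                 storage_cost=0.023, compute_cost=0.10):
    """Store a generated data set iff its monthly storage cost is below
    the expected monthly cost of regenerating it on demand.

    storage_cost: $/GB-month; compute_cost: $/compute-hour (toy rates,
    assumed for illustration).
    """
    monthly_storage = size_gb * storage_cost
    monthly_regen = uses_per_month * gen_compute_hours * compute_cost
    return monthly_storage <= monthly_regen

# A big, cheap-to-recompute, rarely used data set is deleted; a small,
# expensive-to-recompute, frequently used one is kept.
print(should_store(500, 1.0, 0.2))   # False -> regenerate on demand
print(should_store(20, 10.0, 3.0))   # True  -> keep it stored
```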