932 results for Dynamic performance
Abstract:
Recent scholarly discussion on open innovation has put forward the notion that an organisation's ability to internalise external knowledge and learn from various sources in undertaking new product development is crucial to its competitive performance. Nevertheless, little attention has been paid to how growth-oriented small firms identify and exploit entrepreneurial opportunities (i.e. take entrepreneurial action) related to such development, in an open innovation context, from a social learning perspective. This chapter, based on an instrumental case firm, demonstrates analytically how learning as entrepreneurial action takes place, drawing on situated learning theory. It is argued that such learning is dynamic in nature and is founded on specific organising principles that foster both inter- and intracommunal learning. © 2012, IGI Global.
Abstract:
Four patients who had received an anterior cingulotomy (ACING) and five patients who had received both an ACING and an anterior capsulotomy (ACAPS) as an intervention for chronic, treatment-refractory depression were presented with a series of dynamic emotional stimuli and invited to identify the emotion portrayed. Their performance was compared with that of a group of non-surgically treated patients with major depression (n = 17) and with a group of matched, never-depressed controls (n = 22). At the time of testing, four of the nine neurosurgery patients had recovered from their depressive episode, whereas five remained depressed. Analysis of emotion recognition accuracy revealed no significant differences between depressed and non-depressed neurosurgically treated patients. Similarly, no significant differences were observed between the patients treated with ACING alone and those treated with both ACING and ACAPS. Comparison of the emotion recognition accuracy of the neurosurgically treated patients and the depressed and healthy control groups revealed that the surgically treated patients exhibited a general impairment in their recognition accuracy compared to healthy controls. Regression analysis revealed that participants' emotion recognition accuracy was predicted by the number of errors they made on the Stroop colour-naming task. It is plausible that the observed deficit in emotion recognition accuracy was a consequence of impaired attentional control, which may have been a result of the surgical lesions to the anterior cingulate cortex. © 2007 Elsevier Ltd. All rights reserved.
Abstract:
How speech is separated perceptually from other speech remains poorly understood. Recent research indicates that the ability of an extraneous formant to impair intelligibility depends on the variation of its frequency contour. This study explored the effects of manipulating the depth and pattern of that variation. Three formants (F1+F2+F3) constituting synthetic analogues of natural sentences were distributed across the 2 ears, together with a competitor for F2 (F2C) that listeners must reject to optimize recognition (left = F1+F2C; right = F2+F3). The frequency contours of F1 - F3 were each scaled to 50% of their natural depth, with little effect on intelligibility. Competitors were created either by inverting the frequency contour of F2 about its geometric mean (a plausibly speech-like pattern) or using a regular and arbitrary frequency contour (triangle wave, not plausibly speech-like) matched to the average rate and depth of variation for the inverted F2C. Adding a competitor typically reduced intelligibility; this reduction depended on the depth of F2C variation, being greatest for 100%-depth, intermediate for 50%-depth, and least for 0%-depth (constant) F2Cs. This suggests that competitor impact depends on overall depth of frequency variation, not depth relative to that for the target formants. The absence of tuning (i.e., no minimum in intelligibility for the 50% case) suggests that the ability to reject an extraneous formant does not depend on similarity in the depth of formant-frequency variation. Furthermore, triangle-wave competitors were as effective as their more speech-like counterparts, suggesting that the selection of formants from the ensemble also does not depend on speech-specific constraints. © 2014 The Author(s).
Abstract:
Supply chain formation (SCF) is the process of determining the set of participants and exchange relationships within a network with the goal of setting up a supply chain that meets some predefined social objective. Many proposed solutions for the SCF problem rely on centralized computation, which presents a single point of failure and can also lead to problems with scalability. Decentralized techniques that aid supply chain emergence offer a more robust and scalable approach by allowing participants to deliberate between themselves about the structure of the optimal supply chain. Current decentralized supply chain emergence mechanisms are only able to deal with simplistic scenarios in which goods are produced and traded in single units only and without taking into account production capacities or input-output ratios other than 1:1. In this paper, we demonstrate the performance of a graphical inference technique, max-sum loopy belief propagation (LBP), in a complex multiunit supply chain emergence scenario which models additional constraints such as production capacities and input-to-output ratios. We also provide results demonstrating the performance of LBP in dynamic environments, where the properties and composition of participants are altered as the algorithm is running. Our results suggest that max-sum LBP produces consistently strong solutions on a variety of network structures in a multiunit problem scenario, and that performance tends not to be affected by on-the-fly changes to the properties or composition of participants.
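For readers unfamiliar with max-sum message passing, the idea can be sketched on a toy chain-structured utility graph. This is purely illustrative: it is not the paper's multiunit SCF encoding, and all utilities below are invented. On a chain (a tree), max-sum is exact, so the result can be checked against brute-force enumeration; on loopy graphs, as in the paper's setting, the same messages are iterated without general convergence guarantees.

```python
import itertools

# Toy chain x0 - x1 - x2; each variable's domain is a quantity of a good (0-2).
# u[i][v] are unary utilities; p[i][(v, w)] couples x_i and x_{i+1}.
DOM = range(3)
u = [
    {0: 0.0, 1: 1.0, 2: 0.5},
    {0: 0.2, 1: 0.0, 2: 1.2},
    {0: 0.0, 1: 0.9, 2: 0.1},
]
# Pairwise term penalising mismatched quantities (a crude stand-in for flow balance).
p = [{(v, w): -abs(v - w) for v in DOM for w in DOM} for _ in range(2)]

def max_sum_chain(u, p):
    """Forward max-sum messages plus backtracking; exact on a chain."""
    n = len(u)
    f = [dict(u[0])]  # f[i][v] = best utility of x_0..x_i given x_i = v
    back = []
    for i in range(1, n):
        fi, bi = {}, {}
        for v in DOM:
            w = max(DOM, key=lambda w: f[i - 1][w] + p[i - 1][(w, v)])
            fi[v] = u[i][v] + f[i - 1][w] + p[i - 1][(w, v)]
            bi[v] = w
        f.append(fi)
        back.append(bi)
    last = max(DOM, key=lambda v: f[-1][v])
    best, assign = f[-1][last], [last]
    for bi in reversed(back):
        assign.append(bi[assign[-1]])
    assign.reverse()
    return assign, best

def brute_force(u, p):
    def score(xs):
        return (sum(u[i][x] for i, x in enumerate(xs))
                + sum(p[i][(xs[i], xs[i + 1])] for i in range(len(p))))
    best = max(itertools.product(DOM, repeat=len(u)), key=score)
    return list(best), score(best)

assignment, value = max_sum_chain(u, p)
```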
Abstract:
Computational performance increasingly depends on parallelism, and many systems rely on heterogeneous resources such as GPUs and FPGAs to accelerate computationally intensive applications. However, implementations for such heterogeneous systems are often hand-crafted and optimised for one computation scenario, and it can be challenging to maintain high performance when application parameters change. In this paper, we demonstrate that machine learning can help to dynamically choose parameters for task scheduling and load-balancing based on changing characteristics of the incoming workload. We use a financial option pricing application as a case study. We propose a simulation of processing financial tasks on a heterogeneous system with GPUs and FPGAs, and show how dynamic, on-line optimisations could improve such a system. We compare on-line and batch processing algorithms, and we also consider cases with no dynamic optimisations.
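The flavour of on-line adaptation described can be sketched minimally. This stand-in uses a simple exponentially weighted throughput estimate rather than the paper's machine-learning models, and the device names, rates, and parameters are all hypothetical:

```python
class AdaptiveBalancer:
    """Split a task batch between two accelerators by estimated throughput."""

    def __init__(self, alpha=0.3):
        # Initial tasks/sec estimates; "gpu" and "fpga" are illustrative names.
        self.rate = {"gpu": 1.0, "fpga": 1.0}
        self.alpha = alpha  # EWMA smoothing factor

    def split(self, n_tasks):
        # Allocate tasks proportionally to current throughput estimates.
        total = sum(self.rate.values())
        n_gpu = round(n_tasks * self.rate["gpu"] / total)
        return {"gpu": n_gpu, "fpga": n_tasks - n_gpu}

    def observe(self, device, tasks_done, seconds):
        # EWMA update lets the split track drifting workload characteristics.
        measured = tasks_done / seconds
        self.rate[device] += self.alpha * (measured - self.rate[device])

balancer = AdaptiveBalancer()
balancer.observe("gpu", 30, 10)   # GPU measured at 3 tasks/sec
plan = balancer.split(26)
```

A batch (off-line) variant would instead fix the split from historical measurements, which is the contrast the paper's experiments explore.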
Abstract:
Link quality-based rate adaptation has been widely used for IEEE 802.11 networks. However, network performance is affected by both link quality and random channel access. Selection of transmit modes for optimal link throughput can cause medium access control (MAC) throughput loss. In this paper, we investigate this issue and propose a generalised cross-layer rate adaptation algorithm. It considers link quality and channel access jointly to optimise network throughput. The objective is to examine the potential benefits of cross-layer design. An efficient analytic model is proposed to evaluate rate adaptation algorithms under dynamic channel and multi-user access environments. The proposed algorithm is compared to a link throughput optimisation-based algorithm. It is found that rate adaptation by optimising link-layer throughput can result in a large performance loss, which cannot be compensated for by optimising the MAC access mechanism alone. Results show that cross-layer design can achieve consistent and considerable performance gains of up to 20%, and it deserves to be exploited in practical designs for IEEE 802.11 networks.
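The link-versus-MAC tension can be illustrated with a toy mode-selection calculation. The mode table, payload size, and overhead figure below are invented for the sketch, not taken from the paper:

```python
# (rate in Mbit/s, frame error rate at the current channel state) - invented.
MODES = [(6.0, 0.00), (12.0, 0.03), (24.0, 0.10), (54.0, 0.55)]
PAYLOAD_BITS = 12000
MAC_OVERHEAD_US = 150.0  # contention + headers + ACK, assumed constant here

def link_goodput(rate, fer):
    # Link-layer view: ignores channel-access overhead entirely.
    return rate * (1 - fer)

def mac_goodput(rate, fer):
    # MAC-aware view: per-frame airtime includes fixed access overhead.
    tx_us = PAYLOAD_BITS / rate + MAC_OVERHEAD_US
    return (1 - fer) * PAYLOAD_BITS / tx_us  # bits/us == Mbit/s

best_link = max(MODES, key=lambda m: link_goodput(*m))
best_mac = max(MODES, key=lambda m: mac_goodput(*m))
```

With these invented numbers the link-layer metric selects the 54 Mbit/s mode while the MAC-aware metric prefers 24 Mbit/s, illustrating how link-only optimisation can lose MAC throughput.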
Abstract:
The problem of finding the optimal join ordering for executing a query against a relational database management system is a combinatorial optimization problem, which makes deterministic exhaustive search unacceptable for queries with a great number of joined relations. In this work an adaptive genetic algorithm with dynamic population size is proposed for optimizing large join queries. The performance of the algorithm is compared with that of several classical non-deterministic optimization algorithms. Experiments have been performed optimizing several random queries against a randomly generated data dictionary. The proposed adaptive genetic algorithm with probabilistic selection operator outperforms, in a number of test runs, the canonical genetic algorithm with elitist selection as well as two common random search strategies, and proves to be a viable alternative to existing non-deterministic optimization approaches.
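A canonical GA over join orders (roughly the elitist baseline the abstract compares against) can be sketched as follows. The toy cost model, selectivities, and GA parameters are invented for illustration; the paper's adaptive, dynamic-population machinery is not reproduced here:

```python
import random

random.seed(0)  # fixed seed so the sketch is repeatable
N = 6  # number of joined relations
SIZES = [random.randint(10, 1000) for _ in range(N)]
SEL = [[random.uniform(0.001, 0.2) for _ in range(N)] for _ in range(N)]

def cost(order):
    """Toy left-deep plan cost: sum of intermediate result sizes."""
    size, total, joined = SIZES[order[0]], 0.0, [order[0]]
    for r in order[1:]:
        sel = min(SEL[j][r] for j in joined)  # crude selectivity combination
        size = size * SIZES[r] * sel
        total += size
        joined.append(r)
    return total

def order_crossover(a, b):
    """OX: keep a slice of parent a, fill the rest in parent b's order."""
    i, j = sorted(random.sample(range(N), 2))
    mid = a[i:j]
    rest = [g for g in b if g not in mid]
    return rest[:i] + mid + rest[i:]

def ga(pop_size=30, gens=40):
    pop = [random.sample(range(N), N) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]  # elitist selection: keep best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            c = order_crossover(a, b)
            if random.random() < 0.2:  # swap mutation
                i, j = random.sample(range(N), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = survivors + children
    return min(pop, key=cost)

best_order = ga()
```

An adaptive variant, as in the paper, would additionally resize the population and adjust operator probabilities during the run.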
Abstract:
Permanent deformation and fracture may develop simultaneously when an asphalt mixture is subjected to a compressive load. The objective of this research is to separate viscoplasticity and viscofracture from viscoelasticity so that the permanent deformation and fracture of the asphalt mixtures can be individually and accurately characterized without the influence of viscoelasticity. The undamaged properties of 16 asphalt mixtures that have two binder types, two air void contents, and two aging conditions are first obtained by conducting nondestructive creep tests and nondestructive dynamic modulus tests. Testing results are analyzed by using the linear viscoelastic theory, in which the creep compliance and the relaxation modulus are modeled by the Prony model. The dynamic modulus and phase angle of the undamaged asphalt mixtures remain constant with the load cycles. The undamaged asphalt mixtures are then used to perform the destructive dynamic modulus tests, in which the dynamic modulus and phase angle of the damaged asphalt mixtures vary with load cycles, indicating plastic evolution and crack propagation. The growth of cracks is signaled principally by the increase of the phase angle, which occurs only in the tertiary stage. The measured total strain is successfully decomposed into elastic strain, viscoelastic strain, plastic strain, viscoplastic strain, and viscofracture strain by employing the pseudostrain concept and the extended elastic-viscoelastic correspondence principle. The separated viscoplastic strain is characterized by a predictive model for permanent deformation. The separated viscofracture strain is characterized by a fracture strain model for the fracture of the asphalt mixtures, in which the flow number is determined and a crack speed index is proposed. Comparisons of the 16 samples show that aged asphalt mixtures with a low air void content perform better in resisting permanent deformation and fracture.
© 2012 American Society of Civil Engineers.
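The Prony-series representation mentioned above admits a compact sketch. The coefficients below are invented, not the paper's fitted values: the relaxation modulus is E(t) = E_inf + sum_i E_i * exp(-t/rho_i), and the same spectrum yields the complex modulus from which dynamic modulus and phase angle follow for the generalized Maxwell model:

```python
import cmath
import math

# Illustrative Prony parameters (not fitted to any real mixture):
E_INF = 100.0                                          # long-term modulus, MPa
PRONY = [(1500.0, 0.01), (800.0, 0.1), (400.0, 1.0)]   # (E_i in MPa, rho_i in s)

def relaxation_modulus(t):
    """E(t) = E_inf + sum_i E_i * exp(-t / rho_i)."""
    return E_INF + sum(Ei * math.exp(-t / rho) for Ei, rho in PRONY)

def complex_modulus(omega):
    """E*(i*omega) for the generalized Maxwell model with the same spectrum."""
    s = 1j * omega
    return E_INF + sum(Ei * s * rho / (1 + s * rho) for Ei, rho in PRONY)

# Dynamic modulus |E*| and phase angle at a 10 Hz loading frequency:
Estar = complex_modulus(2 * math.pi * 10.0)
dynamic_modulus = abs(Estar)
phase_angle_deg = math.degrees(cmath.phase(Estar))
```

For an undamaged (linear viscoelastic) mixture these quantities are frequency-dependent but constant over load cycles, which is the behaviour the nondestructive tests verified.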
Abstract:
This paper explores the design, development and evaluation of a novel real-time auditory display system for accelerated racing driver skills acquisition. The auditory feedback provides concurrent sensory augmentation and performance feedback using a novel target matching design. Real-time, dynamic, tonal audio feedback representing lateral G-force (a proxy for tire slip) is delivered to one ear whilst a target lateral G-force value representing the ‘limit’ of the car, to which the driver aims to drive, is panned to the driver’s other ear; tonal match across both ears signifies that the ‘limit’ has been reached. An evaluation approach was established to measure the efficacy of the audio feedback in terms of performance, workload and drivers’ assessment of self-efficacy. A preliminary human subject study was conducted in a driving simulator environment. Initial results are encouraging, indicating that there is potential for performance gain and driver confidence enhancement based on the audio feedback.
Abstract:
Communication through relay channels in wireless sensor networks can create diversity and consequently improve the robustness of data transmission for ubiquitous computing and networking applications. In this paper, we investigate the performance of relay channels in terms of diversity gain and throughput via both experimental research and theoretical analysis. Two relaying algorithms, dynamic relaying and fixed relaying, are utilised and tested to find out what the relay channels can contribute to system performance. The tests are based on a wireless relay sensor network comprising a source node, a destination node and a couple of relay nodes, and are carried out in an indoor environment with rare movement of objects nearby. The tests confirm, in line with the analytical results, that more relay nodes lead to higher diversity gain in the network. The test results also show that the data throughput between the source node and the destination node is enhanced by the presence of the relay nodes. Energy consumption in association with the relaying strategy is also analysed. Copyright © 2009 John Wiley & Sons, Ltd.
Abstract:
Shipboard power systems have different characteristics from utility power systems. In a shipboard power system, it is crucial that the systems and equipment work at their peak performance levels. One of the most demanding aspects of simulating shipboard power systems is connecting the device under test to a real-time simulated dynamic equivalent in an environment with actual hardware in the loop (HIL). Real-time simulation can be achieved by using a multi-distributed modeling concept, in which the global system model is distributed over several processors through a communication link. The advantage of this approach is that it permits the gradual change from pure simulation to actual application. In order to perform system studies in such an environment, physical phase-variable models of different components of the shipboard power system were developed using operational parameters obtained from finite element (FE) analysis. These models were developed for two types of studies: low- and high-frequency studies. Low-frequency studies are used to examine shipboard power system behavior under load switching and faults. High-frequency studies were used to predict abnormal conditions due to overvoltages and component harmonic behavior. Different experiments were conducted to validate the developed models. The simulation and experimental results show excellent agreement. The behavior of shipboard power system components under internal faults was investigated using FE analysis. This technique is crucial for shipboard power system fault detection, given the lack of comprehensive fault test databases. A wavelet-based methodology for feature extraction from the shipboard power system current signals was developed for harmonic and fault diagnosis studies.
This modeling methodology can be utilized to evaluate and predict the future behavior of the NPS components in the design stage, which will reduce development cycles, cut overall cost, prevent failures, and allow each subsystem to be tested exhaustively before integrating it into the system.
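The wavelet-based feature extraction step can be sketched with a plain one-level Haar transform applied recursively. This is a generic illustration (the dissertation's choice of wavelet and feature set is not specified here); per-level detail energies are a common compact signature for harmonic and fault analysis of current signals:

```python
def haar_dwt(signal):
    """One-level Haar transform: approximation and detail coefficients.

    Assumes len(signal) is even (and divisible by 2**levels for detail_energy).
    """
    a = [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    d = [(signal[i] - signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    return a, d

def detail_energy(signal, levels=3):
    """Per-level detail energies: a simple feature vector for fault signatures."""
    feats, a = [], list(signal)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(sum(x * x for x in d))
    return feats
```

A steady sinusoidal current concentrates energy in a few levels, while a fault transient spreads energy into the fine-scale details, which is what makes such features usable for diagnosis.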
Abstract:
Providing transportation system operators and travelers with accurate travel time information allows them to make more informed decisions, yielding benefits for individual travelers and for the entire transportation system. Most existing advanced traveler information systems (ATIS) and advanced traffic management systems (ATMS) use instantaneous travel time values estimated based on current measurements, assuming that traffic conditions remain constant in the near future. For more effective applications, it has been proposed that ATIS and ATMS should use travel times predicted for short-term future conditions rather than instantaneous travel times measured or estimated for current conditions. This dissertation research investigates short-term freeway travel time prediction using Dynamic Neural Networks (DNN) based on traffic detector data collected by radar traffic detectors installed along a freeway corridor. DNNs comprise a class of neural networks that are particularly suitable for predicting variables like travel time, but they have not been adequately investigated for this purpose. Before this investigation, it was necessary to identify methods for data imputation to account for the missing data usually encountered when collecting data using traffic detectors. It was also necessary to identify a method to estimate the travel time on the freeway corridor based on data collected using point traffic detectors. A new travel time estimation method referred to as the Piecewise Constant Acceleration Based (PCAB) method was developed and compared with other methods reported in the literature. The results show that one of the simple travel time estimation methods (the average speed method) can work as well as the PCAB method, and both of them outperform other methods. This study also compared the travel time prediction performance of three different DNN topologies with different memory setups.
The results show that one DNN topology (the time-delay neural network) outperforms the other two DNN topologies for the investigated prediction problem. This topology also performs slightly better than the simple multilayer perceptron (MLP) neural network topology that has been used in a number of previous studies for travel time prediction.
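The average speed method named above admits a short sketch. The segment lengths and spot speeds below are invented, and the exact averaging convention (mean of the speeds at the detectors bounding each segment) is an assumption about a common implementation, not a quotation from the dissertation:

```python
# (segment length in miles, upstream spot speed, downstream spot speed in mph);
# all numbers are illustrative.
SEGMENTS = [(0.5, 55.0, 60.0), (0.8, 60.0, 40.0), (0.6, 40.0, 35.0)]

def average_speed_travel_time(segments):
    """Corridor travel time in hours: sum of length / mean(bounding speeds)."""
    return sum(length / ((v_up + v_dn) / 2.0) for length, v_up, v_dn in segments)

tt_minutes = 60.0 * average_speed_travel_time(SEGMENTS)
```

Because it uses only point-detector spot speeds, the method implicitly assumes speed varies linearly between detectors, which is where more elaborate estimators such as PCAB try to do better.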
Abstract:
Disk drives are the bottleneck in the processing of large amounts of data used in almost all common applications. File systems attempt to reduce this by storing data sequentially on the disk drives, thereby reducing the access latencies. Although this strategy is useful when data is retrieved sequentially, the access patterns in real-world workloads are not necessarily sequential, and this mismatch results in storage I/O performance degradation. This thesis demonstrates that one way to improve storage performance is to reorganize data on disk drives in the same way in which it is mostly accessed. We identify two classes of accesses: static, where access patterns do not change over the lifetime of the data, and dynamic, where access patterns change frequently over short durations of time, and we propose, implement and evaluate layout strategies for each of these. Our strategies are implemented in a way that allows them to be seamlessly integrated into or removed from the system as desired. We evaluate our layout strategies for static policies using tree-structured XML data where accesses to the storage device are mostly of two kinds: parent-to-child or child-to-sibling. Our results show that for a specific class of deep-focused queries, the existing file system layout policy performs better by 5–54X. For the non-deep-focused queries, our native layout mechanism shows an improvement of 3–127X. To improve the performance of dynamic access patterns, we implement a self-optimizing storage system that rearranges popular blocks on a dedicated partition based on the observed workload characteristics. Our evaluation shows an improvement of over 80% in disk busy times over a range of workloads. These results show that applying knowledge of data access patterns to allocation decisions can substantially improve I/O performance.
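The dynamic-layout idea (rearranging popular blocks onto a dedicated partition) can be sketched as a popularity-ranked remapping. The block numbers and access trace are invented, and a real system would also handle copy-back, consistency, and eviction, which this sketch omits:

```python
from collections import Counter

def plan_reorganization(trace, partition_blocks):
    """Map the most frequently accessed blocks to a contiguous hot region.

    trace: sequence of accessed block numbers observed over a window.
    partition_blocks: capacity of the dedicated hot partition, in blocks.
    Returns {original_block: sequential offset in the hot partition}.
    """
    hot = [blk for blk, _ in Counter(trace).most_common(partition_blocks)]
    return {blk: off for off, blk in enumerate(hot)}

# Illustrative access trace: block 7 is hottest, then block 3.
trace = [7, 3, 7, 9, 3, 7, 1, 9, 7, 3]
remap = plan_reorganization(trace, 2)
```

Serving the hot blocks from a contiguous region converts scattered seeks into near-sequential I/O, which is the source of the reported reduction in disk busy time.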
Abstract:
This dissertation aims to improve the performance of existing assignment-based dynamic origin-destination (O-D) matrix estimation models in order to successfully apply Intelligent Transportation Systems (ITS) strategies for traffic congestion relief and dynamic traffic assignment (DTA) in transportation network modeling. The methodology framework has two advantages over existing assignment-based dynamic O-D matrix estimation models. First, it incorporates an initial O-D estimation model into the estimation process to provide a high-confidence initial input for the dynamic O-D estimation model, which has the potential to improve the final estimation results and reduce the associated computation time. Second, the proposed methodology framework can automatically convert traffic volume deviation to traffic density deviation in the objective function under congested traffic conditions. Traffic density is a better indicator of traffic demand than traffic volume under congested conditions; thus the conversion can contribute to improving estimation performance. The proposed method shows better performance than a typical assignment-based estimation model (Zhou et al., 2003) in several case studies. In the case study for I-95 in Miami-Dade County, Florida, the proposed method produces a good result in seven iterations, with a root mean square percentage error (RMSPE) of 0.010 for traffic volume and an RMSPE of 0.283 for speed. In contrast, Zhou's model requires 50 iterations to obtain an RMSPE of 0.023 for volume and an RMSPE of 0.285 for speed. In the case study for Jacksonville, Florida, the proposed method reaches a convergent solution in 16 iterations with an RMSPE of 0.045 for volume and an RMSPE of 0.110 for speed, while Zhou's model needs 10 iterations to obtain its best solution, with an RMSPE of 0.168 for volume and an RMSPE of 0.179 for speed.
The successful application of the proposed methodology framework to real road networks demonstrates its ability to provide results with satisfactory accuracy within a reasonable time, thus establishing its potential usefulness in supporting dynamic traffic assignment modeling, ITS applications, and other strategies.
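The RMSPE figures quoted above follow the usual definition of root mean square percentage error, which can be computed as follows (the standard formula is assumed here rather than quoted from the dissertation):

```python
import math

def rmspe(observed, estimated):
    """Root mean square percentage error between observed and estimated values.

    Expressed as a fraction (0.010 == 1.0%); observed values must be nonzero.
    """
    terms = [((e - o) / o) ** 2 for o, e in zip(observed, estimated)]
    return math.sqrt(sum(terms) / len(terms))
```

For example, estimates of 110 and 180 against observations of 100 and 200 are each off by 10%, giving an RMSPE of 0.1.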
Abstract:
Human scent and human remains detection canines are used to locate living or deceased humans under many circumstances. Human scent canines locate individual humans on the basis of their unique scent profile, while human remains detection canines locate the general scent of decomposing human remains. Scent evidence is often collected by law enforcement agencies using a Scent Transfer Unit, a dynamic headspace concentration device. The goals of this research were to evaluate the STU-100 for the collection of human scent samples, and to apply this method to the collection of living and deceased human samples and to the creation of canine training aids. The airflow rate and collection material used with the STU-100 were evaluated using a novel scent delivery method. Controlled Odor Mimic Permeation Systems were created containing representative standard compounds delivered at known rates, improving the reproducibility of optimization experiments. Flow rates and collection materials were compared. Higher airflow rates usually yielded significantly fewer total volatile compounds due to compound breakthrough through the collection material. Collection from polymer and cellulose-based materials demonstrated that the molecular backbone of the material is a factor in the trapping and releasing of compounds. The weave of the material also affects compound collection, as materials with a tighter weave demonstrated enhanced collection efficiencies. Using the optimized method, volatiles were efficiently collected from living and deceased humans. Replicates of the living human samples showed good reproducibility; however, the odor profiles from individuals were not always distinguishable from one another. Analysis of the human remains samples revealed similarity in the type and ratio of compounds.
Two types of prototype training aids were developed utilizing combinations of pure compounds as well as volatiles from actual human samples concentrated onto sorbents; these aids were subsequently used in field tests. The pseudo-scent aids had moderate success in field tests, and the odor pad aids had significant success. This research demonstrates that the STU-100 is a valuable tool for dog handlers and as a field instrument; however, modifications are warranted in order to improve its performance as a method for instrumental detection.