936 results for Concurrent programming
Abstract:
This study examines the role that the size of a victimised organisation and the size of the victim's loss play in attitudes regarding the acceptability or unacceptability of 12 questionable consumer actions. A sample of 815 American adults rated each scenario on a scale anchored by very acceptable and very unacceptable. The size of the victimised organisation was shown to influence consumers' opinions, with more disdain directed towards consumers who take advantage of smaller businesses. Similarly, respondents tended to be more critical of these actions when the loss incurred by the victimised organisation was large. A 2x2 matrix concurrently delineated the extent to which opinions regarding the 12 actions differed depending upon the mediating variable under scrutiny.
Abstract:
Fast restoration of critical loads and non-black-start generators can significantly reduce the economic losses caused by power system blackouts. In a parallel power system restoration scenario, the sectionalization of restoration subsystems plays a very important role in determining the pickup of critical loads before synchronization. Most existing research focuses mainly on the startup of non-black-start generators; the restoration of critical loads, especially loads with cold-load characteristics, has not yet been addressed in optimizing the subsystem divisions. As a result, sectionalized restoration subsystems cannot achieve the best coordination between the pickup of loads and the ramping of generators. In order to generate sectionalizing strategies that consider the pickup of critical loads in parallel power system restoration scenarios, this paper proposes an optimization model that accounts for power system constraints, the characteristics of cold load pickup, and the features of generator startup. A bi-level programming approach is employed to solve the proposed sectionalizing model: the upper level addresses the optimal sectionalizing problem for the restoration subsystems, while the lower level minimizes the outage durations of critical loads. The proposed sectionalizing model is validated on the New England 39-bus system and the IEEE 118-bus system, and comparisons with existing methods are also carried out.
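To make the bi-level structure concrete, the following is a minimal toy sketch in Python, not the paper's model: the upper level enumerates candidate splits of a small set of load buses into two restoration subsystems, and the lower level scores each split with a simple proxy for priority-weighted critical-load outage time, limited by an assumed generator ramp rate. All bus names, load values, priorities, and ramp rates are hypothetical.

```python
from itertools import combinations

# Hypothetical data: bus -> (critical load in MW, priority weight)
loads = {"B1": (30, 3), "B2": (50, 5), "B3": (20, 2), "B4": (40, 4)}
ramp = {"sub1": 60.0, "sub2": 80.0}   # assumed generator ramp rates [MW/h] per subsystem

def lower_level(buses, ramp_mw_per_h):
    """Greedy stand-in for the lower-level problem: pick up loads in priority
    order and accumulate priority-weighted outage time until each load can be
    served by the subsystem's ramping generation."""
    served_mw, cost = 0.0, 0.0
    for bus in sorted(buses, key=lambda b: -loads[b][1]):
        mw, weight = loads[bus]
        t_pickup = (served_mw + mw) / ramp_mw_per_h   # hours until enough capacity has ramped up
        cost += weight * t_pickup
        served_mw += mw
    return cost

bus_list = list(loads)
best = None
for k in range(1, len(bus_list)):                     # upper level: enumerate 2-way splits
    for group1 in combinations(bus_list, k):
        group2 = tuple(b for b in bus_list if b not in group1)
        total = lower_level(group1, ramp["sub1"]) + lower_level(group2, ramp["sub2"])
        if best is None or total < best[0]:
            best = (total, group1, group2)

print("best split:", best[1], "/", best[2], "weighted outage proxy:", round(best[0], 2))
```

In the paper's setting the upper level is itself an optimization over network topology rather than brute-force enumeration, and the lower level respects full power system constraints; the sketch only illustrates how the two levels interact.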
Abstract:
This thesis articulates and examines public engagement programming in an emerging, non-traditional site. As a practice-led research project, the creative work proposes a site-responsive, engagement-centric, agile model for curatorial programming that developed out of the dynamic, new media/digital curatorial practice at QUT's Creative Industries Precinct. The model and its accompanying exegetical framework, Curating in Uncharted Territories, offer a theoretically informed approach to programming, delivering and reporting for curatorial practices in non-traditional sites of public engagement. The research provides the foundation for full development of the model and the basis for further research.
Abstract:
In medical negligence litigation expert evidence has long played a dominant role. The trend towards the use of concurrent expert evidence is now well underway. However, for the lawyers and the doctors involved, the pathway is not yet familiar. Disputes have frequently arisen in the context of pre-hearing expert conclaves, given the adversarial nature of litigation and perhaps fuelled by fears of a less transparent process at this increasingly important stage. This article explains the concurrent expert evidence framework and examines areas of common dispute both in the conclaves and at trial, with a view to providing assistance to legal practitioners working in this area and the medical practitioners called upon to provide expert evidence in such litigation.
Abstract:
We examined parenting behaviors and their association with concurrent and later child behavior problems. Children with an intellectual disability (ID) were identified from a UK birth cohort (N = 516 at age 5). Compared to parents of children without an ID, parents of children with an ID used discipline less frequently but reported a more negative relationship with their child. Among children with an ID, discipline and home atmosphere had no long-term association with behavior problems, whereas relationship quality did: closer relationships were associated with fewer concurrent and later child behavior problems. Increased parent-child conflict was associated with greater concurrent and later behavior problems. Parenting programs in ID could target parent-child relationship quality as a potential mediator of behavioral improvements in children.
Abstract:
In this paper, we look at the concept of reversibility, that is, negating opposites, counterbalances, and actions that can be reversed. Piaget identified reversibility as an indicator of the ability to reason at a concrete operational level. We investigate to what degree novice programmers manifest the ability to work with this concept of reversibility by providing them with a small piece of code and then asking them to write code that undoes its effect. On testing entire cohorts of students in their first year of learning to program, we found that an overwhelming majority of them could not cope with such a concept. We then conducted think-aloud studies in which we observed novices working on this task and analyzed their contrasting abilities to deal with it. The results of this study demonstrate the need to better understand our students' reasoning abilities, and for a teaching model aimed at that level of reality.
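As an illustration of the kind of task described (a hypothetical example, not the study's actual instrument), a novice might be shown a short "forward" fragment and asked to write code that undoes its effect:

```python
# Forward fragment shown to the student
x = 10
y = 4
x = x + y        # x becomes 14
y = x - y        # y becomes 10 (the old x)

# One possible "reversal" the student is asked to write
y = x - y        # restore y = 4
x = x - y        # restore x = 10

assert (x, y) == (10, 4)   # the original state is recovered
```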
Abstract:
We consider the problem of controlling a Markov decision process (MDP) with a large state space, so as to minimize average cost. Since it is intractable to compete with the optimal policy for large scale problems, we pursue the more modest goal of competing with a low-dimensional family of policies. We use the dual linear programming formulation of the MDP average cost problem, in which the variable is a stationary distribution over state-action pairs, and we consider a neighborhood of a low-dimensional subset of the set of stationary distributions (defined in terms of state-action features) as the comparison class. We propose a technique based on stochastic convex optimization and give bounds that show that the performance of our algorithm approaches the best achievable by any policy in the comparison class. Most importantly, this result depends on the size of the comparison class, but not on the size of the state space. Preliminary experiments show the effectiveness of the proposed algorithm in a queuing application.
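For reference, the standard dual LP for the average-cost MDP that this line of work builds on can be written as below (notation assumed here: \(\mu(s,a)\) is the stationary state-action distribution, \(c(s,a)\) the per-step cost, and \(P(s'\mid s,a)\) the transition kernel). The abstract's method does not solve this full program; it restricts attention to a neighborhood of a low-dimensional, feature-defined subset of such distributions.

```latex
% Dual LP of the average-cost MDP (background form, not the paper's restricted program)
\begin{aligned}
\min_{\mu \ge 0}\quad & \sum_{s,a} \mu(s,a)\, c(s,a) \\
\text{s.t.}\quad & \sum_{a} \mu(s',a) \;=\; \sum_{s,a} \mu(s,a)\, P(s' \mid s, a) \quad \forall s', \\
& \sum_{s,a} \mu(s,a) \;=\; 1 .
\end{aligned}
```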
Abstract:
Combining the philosophies of nonlinear model predictive control and approximate dynamic programming, a new suboptimal control design technique named model predictive static programming (MPSP) is presented in this paper, applicable to finite-horizon nonlinear problems with terminal constraints. The technique is computationally efficient and hence can potentially be implemented online. The effectiveness of the proposed method is demonstrated by designing an ascent-phase guidance scheme for a ballistic missile propelled by solid motors. A comparison study with a conventional gradient method shows that the MPSP solution is quite close to the optimal solution.
Abstract:
This paper presents the programming of an FPGA (Field Programmable Gate Array) to emulate the dynamics of DC machines. The FPGA allows high-speed, real-time simulation with high precision. The described design includes a block diagram representation of the DC machine, which contains all of the required arithmetic and logical operations. The real-time simulation of the machine in the FPGA is controlled through user interfaces: a keypad interface, an on-line LCD display, and a digital-to-analog converter. This approach provides emulation of the electrical machine by changing its parameters. A separately excited DC machine is implemented and experimental results are presented.
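As a rough illustration of the per-step arithmetic such a block-diagram emulator performs (a minimal sketch with assumed textbook parameters, not the paper's FPGA design), the separately excited DC machine equations can be stepped with forward Euler:

```python
# Separately excited DC machine, no-load start-up; all parameter values are assumed.
Ra, La = 1.2, 0.05      # armature resistance [ohm], inductance [H]
Ke, Kt = 0.8, 0.8       # back-EMF and torque constants
J, B = 0.02, 0.005      # rotor inertia [kg m^2], viscous friction [N m s]
Va, TL = 120.0, 0.0     # applied armature voltage [V], load torque [N m]
dt = 1e-4               # emulation time step [s]

ia, w = 0.0, 0.0        # armature current [A], speed [rad/s]
for _ in range(int(1.0 / dt)):               # simulate 1 s of operation
    dia = (Va - Ra * ia - Ke * w) / La       # armature circuit: Va = Ra*ia + La*dia/dt + Ke*w
    dw = (Kt * ia - B * w - TL) / J          # mechanical equation: J*dw/dt = Kt*ia - B*w - TL
    ia += dia * dt                           # the multiply/add steps an FPGA pipeline would map
    w += dw * dt

print(f"after 1 s: ia = {ia:.2f} A, speed = {w:.1f} rad/s")
```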
Abstract:
Background: A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks, graphs with many thousands of nodes in which an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data. Results: The structure learning task is cast as a sparse linear regression problem, which is then posed as a LASSO (l1-constrained fitting) problem and finally solved by formulating a linear program (LP). A bound on the generalization error of this approach is given in terms of the leave-one-out error. The accuracy and utility of LP-SLGNs is assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold-standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first- and/or second-ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell cycle transcript profiling data sets capture known regulatory associations. In each S. cerevisiae LP-SLGN, the number of nodes with a particular degree follows an approximate power law, suggesting that its degree distribution is similar to that observed in real-world networks. Inspection of these LP-SLGNs suggests biological hypotheses amenable to experimental verification. Conclusion: A statistically robust and computationally efficient LP-based method for estimating the topology of a large sparse undirected graph from high-dimensional data yields representations of genetic networks that are biologically plausible and useful abstractions of the structures of real genetic networks. Analysis of the statistical and topological properties of learned LP-SLGNs may have practical value; for example, genes with high random-walk betweenness, a measure of the centrality of a node in a graph, are good candidates for intervention studies and hence for integrated computational-experimental investigations designed to infer more realistic and sophisticated probabilistic directed graphical model representations of genetic networks. The LP-based solutions of the sparse linear regression problem described here may provide a method for learning the structure of transcription factor networks from transcript profiling and transcription factor binding motif data.
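To give a flavor of the l1-constrained fitting / LP reduction referred to above, here is a generic sketch, not the paper's LP-SLGN formulation: with an absolute-error loss and the weights split into positive and negative parts, the fit becomes a linear program. The synthetic data, the l1 budget t, and all variable names are assumptions made for illustration; in the network setting, one such regression would be solved per gene against the remaining genes' profiles.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p = 40, 10                       # profiles (samples) and candidate regulators (assumed sizes)
X = rng.standard_normal((n, p))
w_true = np.zeros(p); w_true[:3] = [1.5, -2.0, 0.7]
y = X @ w_true + 0.05 * rng.standard_normal(n)
t = 5.0                             # l1 budget on the weights (assumed)

# Variables: [w_plus (p), w_minus (p), e (n)]; minimise the sum of absolute residuals e.
c = np.concatenate([np.zeros(2 * p), np.ones(n)])
# Residual constraints:  +(Xw - y) <= e   and   -(Xw - y) <= e,  with w = w_plus - w_minus
A_res_pos = np.hstack([X, -X, -np.eye(n)])
A_res_neg = np.hstack([-X, X, -np.eye(n)])
# l1 budget: sum(w_plus + w_minus) <= t
A_l1 = np.concatenate([np.ones(2 * p), np.zeros(n)])[None, :]
A_ub = np.vstack([A_res_pos, A_res_neg, A_l1])
b_ub = np.concatenate([y, -y, [t]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
w_hat = res.x[:p] - res.x[p:2 * p]          # recover signed weights; near-zero entries = no edge
print("recovered weights:", np.round(w_hat, 2))
```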
Abstract:
Despite great advances in very-large-scale integrated-circuit design and manufacturing, the performance of even the best available high-speed, high-resolution analog-to-digital converters (ADCs) is known to deteriorate while acquiring fast-rising, high-frequency, and nonrepetitive waveforms. Waveform digitizers (ADCs) used in high-voltage impulse recordings and measurements are invariably subjected to such waveforms. Errors resulting from a lowered ADC performance can be unacceptably high, especially when higher accuracies have to be achieved (e.g., when the digitizer is part of a reference measuring system). Static and dynamic nonlinearities (estimated independently) are vital indices for evaluating the performance and suitability of ADCs used in such environments. Typically, the estimation of static nonlinearity takes 10-12 h or more (for a 12-b ADC), and dynamic characterization requires the acquisition of millions of samples at high input frequencies. ADCs with even higher resolution and faster sampling speeds will soon become available, so there is a need to reduce the testing time for evaluating these parameters. This paper proposes a novel and time-efficient method for the simultaneous estimation of static and dynamic nonlinearity from a single test. This is achieved by conceiving a test signal composed of a high-frequency sinusoid (which addresses the dynamic assessment) modulated by a low-frequency ramp (relevant to the static part). Details of the implementation and results on two digitizers are presented and compared with nonlinearities determined by the existing standardized approaches. The good agreement in results and the achievable time savings indicate the method's suitability.
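One plausible reading of the proposed composite stimulus, shown as a minimal sketch with assumed frequencies and record length (not the paper's actual test parameters), is a high-frequency sinusoid whose amplitude is scaled by a slow ramp, so a single record sweeps the full static code range while still exercising dynamic behaviour:

```python
import numpy as np

fs = 100e6                 # digitizer sampling rate [Hz] (assumed)
f_hf = 1e6                 # high-frequency sine for the dynamic part [Hz] (assumed)
T = 10e-3                  # record length [s]; the slow ramp spans the whole record
t = np.arange(0, T, 1 / fs)

ramp = t / T                                   # low-frequency ramp, rising 0 -> 1 over the record
signal = ramp * np.sin(2 * np.pi * f_hf * t)   # ramp-modulated sinusoid fed to the ADC under test

print(signal.shape, float(signal.min()), float(signal.max()))
```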
Abstract:
A new method of specifying the syntax of programming languages, known as hierarchical language specifications (HLS), is proposed. Efficient parallel algorithms for parsing languages generated by HLS are presented. These algorithms run on an exclusive-read exclusive-write parallel random-access machine. They require O(n) processors and O(log^2 n) time, where n is the length of the string to be parsed. The most important feature of these algorithms is that they do not use a stack.