976 results for Diagonal Sum


Relevance:

60.00%

Publisher:

Abstract:

The project introduces an application using computer vision for hand gesture recognition. A camera records a live video stream, from which a snapshot is taken with the help of an interface. The system is trained on each counting hand gesture (one, two, three, four, and five) at least once. A test gesture is then presented, and the system attempts to recognize it. Research was carried out on a number of algorithms that could best differentiate hand gestures, and the diagonal sum algorithm was found to give the highest accuracy rate. In the preprocessing phase, a self-developed algorithm removes the background of each training gesture. The image is then converted into a binary image, and the sums of all diagonal elements of the picture are taken. These sums help in differentiating and classifying the different hand gestures. Previous systems have used data gloves or markers for input; this system imposes no such constraints, and the user can make hand gestures in view of the camera naturally. A completely robust hand gesture recognition system is still under heavy research and development; the implemented system serves as an extendible foundation for future work.
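The core of the preprocessing step described above — summing the pixels along each diagonal of the binarized snapshot and matching against stored training profiles — can be sketched as follows. This is a toy illustration with hypothetical 3×3 "gestures", not the project's actual code:

```python
def diagonal_sums(binary_image):
    """Sum the pixels along every diagonal of a binary image.

    Returns one sum per diagonal (rows + cols - 1 values), which can
    serve as a simple feature vector for template matching."""
    rows, cols = len(binary_image), len(binary_image[0])
    sums = [0] * (rows + cols - 1)
    for r in range(rows):
        for c in range(cols):
            sums[r + c] += binary_image[r][c]
    return sums

def classify(sample, templates):
    """Pick the training gesture whose diagonal-sum profile is closest
    (smallest absolute difference, summed over diagonals)."""
    def distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(templates, key=lambda label: distance(sample, templates[label]))

# Toy 3x3 binary "gestures" (hypothetical stand-ins for camera snapshots).
one = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
two = [[1, 0, 1], [1, 0, 1], [1, 0, 1]]
templates = {"one": diagonal_sums(one), "two": diagonal_sums(two)}
print(classify(diagonal_sums(one), templates))  # -> one
```

In a real pipeline the binary image would come from the background-removal step the abstract describes; here the templates are hard-coded for illustration.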

Relevance:

30.00%

Publisher:

Abstract:

This thesis focuses mainly on linear algebraic aspects of combinatorics. Let N_t(H) be an incidence matrix with edges versus all subhypergraphs of a complete hypergraph that are isomorphic to H. Richard M. Wilson and the author find the general formula for the Smith normal form or diagonal form of N_t(H) for all simple graphs H and for a very general class of t-uniform hypergraphs H.
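As a toy illustration of such diagonal forms (not the N_t(H) computation itself), one can compute the Smith normal form of the vertex-edge incidence matrix of the complete graph K_3 with SymPy; its invariant factors are 1, 1, 2:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Vertex-edge incidence matrix of K_3: rows are vertices, columns are
# the edges 12, 13, 23. A tiny stand-in for the N_t(H) matrices.
N = Matrix([
    [1, 1, 0],   # vertex 1 lies on edges 12 and 13
    [1, 0, 1],   # vertex 2 lies on edges 12 and 23
    [0, 1, 1],   # vertex 3 lies on edges 13 and 23
])
D = smith_normal_form(N, domain=ZZ)
print(D)  # diagonal form with invariant factors 1, 1, 2
```

Each invariant factor divides the next, which is what distinguishes the Smith normal form from a general diagonal form.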

As a continuation, the author determines the formula for diagonal forms of integer matrices obtained from other combinatorial structures, including incidence matrices for subgraphs of a complete bipartite graph and inclusion matrices for multisets.

One major application of diagonal forms is in zero-sum Ramsey theory. For instance, Caro's results in zero-sum Ramsey numbers for graphs and Caro and Yuster's results in zero-sum bipartite Ramsey numbers can be reproduced. These results are further generalized to t-uniform hypergraphs. Other applications include signed bipartite graph designs.

Research results on some other problems are also included in this thesis, such as a Ramsey-type problem on equipartitions, Hartman's conjecture on large sets of designs and a matroid theory problem proposed by Welsh.

Relevance:

30.00%

Publisher:

Abstract:

Let G be a simple graph on n vertices and e(G) edges. Consider the signless Laplacian Q(G) = D + A, where A is the adjacency matrix and D is the diagonal matrix of vertex degrees of G. Let q1(G) and q2(G) be the first and second largest eigenvalues of Q(G), respectively, and denote by S_n^+ the star graph with an additional edge. It is proved that the inequality q1(G) + q2(G) ≤ e(G) + 3 is tighter for the graph S_n^+ among all firefly graphs, and also tighter for S_n^+ than for the graphs K_k ∨ K_{n−k} recently presented by Ashraf, Omidi and Tayfeh-Rezaie. It is also conjectured that S_n^+ minimizes f(G) = e(G) − q1(G) − q2(G) among all graphs G on n vertices.
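A quick numerical check of the bound for S_5^+, assuming the intended inequality is q1(G) + q2(G) ≤ e(G) + 3 (a minimal NumPy sketch, not the paper's proof technique):

```python
import numpy as np

def signless_laplacian(adj):
    """Q = D + A for an adjacency matrix A."""
    A = np.array(adj, dtype=float)
    return np.diag(A.sum(axis=1)) + A

# S_5^+: a star on 5 vertices with one extra edge between two leaves.
A = np.zeros((5, 5))
for leaf in range(1, 5):          # vertex 0 is the centre of the star
    A[0, leaf] = A[leaf, 0] = 1
A[1, 2] = A[2, 1] = 1             # the additional edge

q = np.sort(np.linalg.eigvalsh(signless_laplacian(A)))[::-1]
e = A.sum() / 2                   # number of edges: 5
print(q[0] + q[1] <= e + 3)       # checks q1 + q2 <= e(G) + 3
```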

Relevance:

20.00%

Publisher:

Abstract:

Principal Topic Although corporate entrepreneurship is of vital importance for long-term firm survival and growth (Zahra and Covin, 1995), researchers still struggle with understanding how to manage corporate entrepreneurship activities. Corporate entrepreneurship consists of three parts: innovation, venturing, and renewal processes (Guth and Ginsberg, 1990). Innovation refers to the development of new products, venturing to the creation of new businesses, and renewal to redefining existing businesses (Sharma and Chrisman, 1999; Verbeke et al., 2007). Although there are many studies focusing on one of these aspects (cf. Burgelman, 1985; Huff et al., 1992), it is very difficult to compare the outcomes of these studies due to differences in contexts, measures, and methodologies. This is a significant gap in our understanding of CE: firms engage in all three aspects of CE, making it important to compare managerial and organizational antecedents of innovation, venturing and renewal processes, because factors that enhance venturing activities may simultaneously inhibit renewal activities. The limited studies that did empirically compare the individual dimensions (cf. Zahra, 1996; Zahra et al., 2000; Yiu and Lau, 2008; Yiu et al., 2007) generally failed to provide a systematic explanation for the potentially different effects of organizational antecedents on innovation, venturing, and renewal. With this study we aim to investigate the different effects of structural separation and social capital on corporate entrepreneurship activities. Access to existing knowledge and the development of new knowledge have been deemed of critical importance in CE activities (Floyd and Wooldridge, 1999; Covin and Miles, 2007; Katila and Ahuja, 2002). Developing new knowledge can be facilitated by structurally separating corporate entrepreneurial units from mainstream units (cf. Burgelman, 1983; Hill and Rothaermel, 2003; O'Reilly and Tushman, 2004).
Existing knowledge and resources are available through networks of social relationships, defined as social capital (Nahapiet and Ghoshal, 1998; Yiu and Lau, 2008). Although social capital has primarily been studied at the organizational level, it might be equally important at the top management level (Belliveau et al., 1996). However, little is known about the joint effects on corporate entrepreneurship of structural separation and of integrative mechanisms that provide access to social capital. Could these integrative mechanisms, for example, connect the separated units to facilitate both knowledge creation and sharing? Do these effects differ for innovation, venturing, and renewal processes? Are the effects different for organizational versus top management team integration mechanisms? Corporate entrepreneurship activities have, for example, been suggested to take place at different levels: innovation is suggested to be a more bottom-up process, whereas strategic renewal is a more top-down process (Floyd and Lane, 2000; Volberda et al., 2001). Corporate venturing is also a more bottom-up process, but due to the greater resource commitments required relative to innovation, ventures need to be approved by top management (Burgelman, 1983). As such, we explore the following key research question in this paper: How do social capital and structural separation at the organizational and TMT levels differentially influence innovation, venturing, and renewal processes? Methodology/Key Propositions We investigated our hypotheses on a final sample of 240 companies in a variety of industries in the Netherlands. All our measures were validated in previous studies. We targeted a second respondent in each firm to reduce problems with single-rater data (James et al., 1984). We separated the measurement of the independent and dependent variables into two surveys to create a one-year time lag and reduce potential common method bias (Podsakoff et al., 2003).
Results and Implications Consistent with our hypotheses, our results show that configurations of structural separation and integrative mechanisms have different effects on the three aspects of corporate entrepreneurship. Innovation was affected by organizational-level mechanisms, renewal by integrative mechanisms at the top management team level, and venturing by mechanisms at both levels. Surprisingly, our results indicated that integrative mechanisms at the top management team level had negative effects on corporate entrepreneurship activities. We believe this paper makes two significant contributions. First, we provide more insight into the effects of ambidextrous organizational forms (i.e., combinations of differentiation and integration mechanisms) on venturing, innovation and renewal processes. Our findings show that more valuable insights can be gained by comparing the individual parts of corporate entrepreneurship rather than focusing on the whole. Second, we deliver insights into how management can create a facilitative organizational context for these corporate entrepreneurship activities.

Relevance:

20.00%

Publisher:

Abstract:

In an automotive environment, the performance of a speech recognition system is affected by environmental noise if the speech signal is acquired directly from a microphone. Speech enhancement techniques are therefore necessary to improve the speech recognition performance. In this paper, a field-programmable gate array (FPGA) implementation of dual-microphone delay-and-sum beamforming (DASB) for speech enhancement is presented. As the first step towards a cost-effective solution, the implementation described in this paper uses a relatively high-end FPGA device to facilitate the verification of various design strategies and parameters. Experimental results show that the proposed design can produce output waveforms close to those generated by a theoretical (floating-point) model with modest usage of FPGA resources. Speech recognition experiments are also conducted on enhanced in-car speech waveforms produced by the FPGA in order to compare recognition performance with the floating-point representation running on a PC.
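The delay-and-sum idea can be sketched in a few lines: delay one channel so that both microphones "look" towards the same source, then average. This integer-delay toy model omits the fractional delays and fixed-point arithmetic an FPGA pipeline would use:

```python
import math

def delay_and_sum(channel_a, channel_b, delay_b):
    """Delay channel_b by delay_b samples, then average the two channels.
    Coherent (speech) content adds constructively; uncorrelated noise
    is partly averaged out."""
    delayed_b = [0.0] * delay_b + list(channel_b[:len(channel_b) - delay_b])
    return [(x + y) / 2.0 for x, y in zip(channel_a, delayed_b)]

# A sine-wave "source" that reaches mic 2 three samples after mic 1.
n = 32
src = [math.sin(2 * math.pi * k / 16.0) for k in range(n)]
mic1 = src
mic2 = [0.0] * 3 + src[:n - 3]

# Steering towards the source: delay mic 1 so it lines up with mic 2.
out = delay_and_sum(mic2, mic1, 3)
```

With the channels aligned, the sine recombines at full amplitude; a source from any other direction would be attenuated by the averaging.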

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present the outcomes of a project exploring the use of Field-Programmable Gate Arrays (FPGAs) as co-processors for scientific computation. We designed a custom circuit for the pipelined solving of multiple tri-diagonal linear systems. The design is well suited for applications that require many independent tri-diagonal system solves, such as finite difference methods for solving PDEs or applications utilising cubic spline interpolation. The selected solver algorithm was the Tri-Diagonal Matrix Algorithm (TDMA, or Thomas Algorithm). Our solver supports user-specified precision through the use of a custom floating-point VHDL library supporting addition, subtraction, multiplication and division. The variable-precision TDMA solver was tested for correctness in simulation mode. The TDMA pipeline was tested successfully in hardware using a simplified solver model. The details of implementation, the limitations, and future work are also discussed.
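For reference, the Thomas algorithm that the solver implements can be sketched in software as follows (a floating-point model for illustration, not the VHDL pipeline itself):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with the Thomas algorithm (TDMA).

    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side.
    O(n): one forward-elimination sweep, one back-substitution sweep."""
    n = len(b)
    cp = [0.0] * n   # modified super-diagonal
    dp = [0.0] * n   # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# A diagonally dominant 4x4 system, the well-conditioned case TDMA assumes.
a = [0.0, -1.0, -1.0, -1.0]
b = [4.0, 4.0, 4.0, 4.0]
c = [-1.0, -1.0, -1.0, 0.0]
d = [5.0, 5.0, 5.0, 5.0]
print(thomas_solve(a, b, c, d))
```

The two sequential sweeps are what the pipelined FPGA design exploits across many independent systems: while one system is in back-substitution, the next can enter forward elimination.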

Relevance:

20.00%

Publisher:

Abstract:

The detection and correction of defects remains among the most time-consuming and expensive aspects of software development. Extensive automated testing and code inspections may mitigate their effect, but some code fragments are necessarily more likely to be faulty than others, and automated identification of fault-prone modules helps to focus testing and inspections, thus limiting wasted effort and potentially improving detection rates. However, software metrics data is often extremely noisy, with enormous imbalances in the size of the positive and negative classes. In this work, we present a new approach to predictive modelling of fault proneness in software modules, introducing a new feature representation to overcome some of these issues. This rank sum representation offers improved, or at worst comparable, performance relative to earlier approaches on standard data sets, and readily allows the user to choose an appropriate trade-off between precision and recall to optimise inspection effort for different testing environments. The method is evaluated using the NASA Metrics Data Program (MDP) data sets, and performance is compared with existing studies based on the Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers, and with our own comprehensive evaluation of these methods.
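The paper's exact feature construction is not reproduced here, but a minimal sketch of a rank-based representation — replacing each metric by its within-column rank (ties averaged) and summing the ranks per module — might look like:

```python
def ranks(values):
    """Rank each value (1 = smallest), averaging tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    rank = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0       # average rank across the tie group
        for k in range(i, j + 1):
            rank[order[k]] = avg
        i = j + 1
    return rank

def rank_sum_features(rows):
    """Replace each metric column by within-column ranks, then take the
    per-module sum of ranks as a single scalar score. Ranking makes the
    score robust to the heavy noise and skew typical of metrics data."""
    cols = list(zip(*rows))
    col_ranks = [ranks(list(col)) for col in cols]
    return [sum(r[i] for r in col_ranks) for i in range(len(rows))]

# Three hypothetical modules x two metrics (e.g. LOC, cyclomatic complexity).
modules = [[120, 4], [300, 10], [80, 4]]
print(rank_sum_features(modules))
```

A threshold on such a score would then trade precision against recall: lowering it flags more modules for inspection.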

Relevance:

20.00%

Publisher:

Abstract:

Reconfigurable computing devices can increase the performance of compute-intensive algorithms by implementing application-specific co-processor architectures. The power cost for this performance gain is often an order of magnitude less than that of modern CPUs and GPUs. Exploiting the potential of reconfigurable devices such as Field-Programmable Gate Arrays (FPGAs) is typically a complex and tedious hardware engineering task. Recently the major FPGA vendors (Altera and Xilinx) have released their own high-level design tools, which have great potential for rapid development of FPGA-based custom accelerators. In this paper, we evaluate Altera's OpenCL Software Development Kit and Xilinx's Vivado High-Level Synthesis tool. These tools are compared for their performance, logic utilisation, and ease of development for the test case of a tri-diagonal linear system solver.

Relevance:

20.00%

Publisher:

Abstract:

The sum of k mins protocol was proposed by Hopper and Blum as a protocol for secure human identification. The goal of the protocol is to let an unaided human securely authenticate to a remote server. The main ingredient of the protocol is the sum of k mins problem; the difficulty of solving this problem determines the security of the protocol. In this paper, we show that the sum of k mins problem is NP-complete and W[1]-hard. This latter notion relates to fixed-parameter intractability. We also discuss the use of the sum of k mins protocol in resource-constrained devices.
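A simplified rendering of the computation the human performs in such a protocol: the secret is a set of k index pairs, and the response to a challenge digit vector is the sum of the minimum of each secret pair, reduced modulo 10. The digit range, k, and modulus below are illustrative, not the paper's parameters:

```python
def sum_of_k_mins(challenge, secret_pairs, modulus=10):
    """Response to a challenge digit vector: for each secret index pair
    (i, j), take min(challenge[i], challenge[j]), sum, reduce mod 10.
    (A simplified rendering of the Hopper-Blum ingredient.)"""
    return sum(min(challenge[i], challenge[j])
               for i, j in secret_pairs) % modulus

secret = [(0, 3), (1, 5), (2, 4)]        # k = 3 secret position pairs
challenge = [9, 1, 5, 2, 7, 3]           # digits shown by the server
print(sum_of_k_mins(challenge, secret))
```

The hardness results in the abstract concern recovering the secret pairs from observed challenge/response transcripts.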

Relevance:

20.00%

Publisher:

Abstract:

Universal One-Way Hash Functions (UOWHFs) may be used in place of collision-resistant functions in many public-key cryptographic applications. At Asiacrypt 2004, Hong, Preneel and Lee introduced the stronger security notion of higher order UOWHFs to allow construction of long-input UOWHFs using the Merkle-Damgård domain extender. However, they did not provide any provably secure constructions for higher order UOWHFs. We show that the subset sum hash function is a kth order Universal One-Way Hash Function (hashing n bits to m < n bits) under the Subset Sum assumption for k = O(log m). Therefore we strengthen a previous result of Impagliazzo and Naor, who showed that the subset sum hash function is a UOWHF under the Subset Sum assumption. We believe our result is of theoretical interest; as far as we are aware, it is the first example of a natural and computationally efficient UOWHF which is also a provably secure higher order UOWHF under the same well-known cryptographic assumption, whereas this assumption does not seem sufficient to prove its collision-resistance. A consequence of our result is that one can apply the Merkle-Damgård extender to the subset sum compression function with ‘extension factor’ k+1, while losing (at most) about k bits of UOWHF security relative to the UOWHF security of the compression function. The method also leads to a saving of up to m log(k+1) bits in key length relative to the Shoup XOR-Mask domain extender applied to the subset sum compression function.
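The subset sum compression function in question maps an n-bit input to an m-bit output by summing the key words selected by the input bits. A minimal sketch, with one common formulation (sum modulo 2^m) and purely illustrative parameter sizes:

```python
import random

def subset_sum_hash(key, x_bits, m):
    """Subset-sum compression: h_a(x) = sum of a_i over the bits x_i = 1,
    taken mod 2^m. The key a is a list of n random m-bit integers,
    so n input bits are hashed down to m < n output bits."""
    return sum(a for a, bit in zip(key, x_bits) if bit) % (2 ** m)

n, m = 16, 8                              # illustrative sizes only
rng = random.Random(0)
key = [rng.randrange(2 ** m) for _ in range(n)]
x = [1, 0] * (n // 2)                     # a sample 16-bit input
print(subset_sum_hash(key, x, m))
```

Finding two inputs with the same output is a subset sum (knapsack) instance in the key words, which is what ties the function's one-wayness to the Subset Sum assumption.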

Relevance:

20.00%

Publisher:

Abstract:

Aspects of Keno modelling throughout the Australian states of Queensland, New South Wales and Victoria are discussed: the trivial Heads or Tails game and the more interesting Keno Bonus, which leads to consideration of the subset sum problem. The most intricate structure is where Heads or Tails and Keno Bonus are combined, and here the issue of independence arises. Closed-form expressions for the expected return to the player are presented in each case.
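The hypergeometric calculation behind such expected-return expressions can be sketched as follows. A typical keno draw takes 20 of 80 numbers; the paytable below is illustrative only, not any actual game's:

```python
from math import comb

def match_probability(picked, matches, pool=80, drawn=20):
    """Hypergeometric probability that `matches` of the player's
    `picked` numbers appear among the `drawn` of `pool` numbers."""
    return (comb(picked, matches) * comb(pool - picked, drawn - matches)
            / comb(pool, drawn))

def expected_return(paytable, picked, pool=80, drawn=20):
    """Expected return per unit staked, for a paytable mapping
    match counts to payouts."""
    return sum(pay * match_probability(picked, m, pool, drawn)
               for m, pay in paytable.items())

# Illustrative 3-spot paytable: pay 1 for two matches, 44 for three.
paytable = {2: 1, 3: 44}
print(round(expected_return(paytable, picked=3), 4))
```

The closed expressions in the paper amount to such sums written out explicitly, with the extra games layered on top.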

Relevance:

20.00%

Publisher:

Abstract:

Purpose of this paper This research aims to examine the effects of inadequate documentation on the cost management and tendering processes in Managing Contractor contracts, using Fixed Lump Sum as a benchmark. Design/methodology/approach A questionnaire survey was conducted with industry practitioners to solicit their views on documentation quality issues associated with the construction industry, followed by a series of semi-structured interviews to validate the survey findings. Findings and value The results showed that documentation quality remains a significant issue, contributing to the industry's inefficiency and poor reputation. The level of satisfaction for individual attributes of documentation quality varies. Attributes that do appear to be affected by the choice of procurement method include coordination, buildability, efficiency, completeness and delivery time. Similarly, the use and effectiveness of risk mitigation techniques appears to vary between the methods, based on factors such as documentation completeness, early involvement and fast tracking. Originality/value of paper This research addresses a gap in the existing body of knowledge: there has been limited study of whether the choice of project procurement system influences documentation quality, and of the level of that impact. Conclusions Ultimately, the research concludes that the entire project team, including the client and designers, should carefully consider the individual project's requirements and compare those to the trade-offs associated with documentation quality and the procurement method. While documentation quality is certainly an issue to be improved upon, identifying a project's performance requirements allows a procurement method to be chosen that maximises the likelihood those requirements will be met. This allows the aspects of documentation quality considered most important to the individual project to be managed appropriately.

Relevance:

20.00%

Publisher:

Abstract:

The magnetic moment of the Λ hyperon is calculated using the QCD sum-rule approach of Ioffe and Smilga. It is shown that μΛ has the structure μΛ = (2/3)(e_u + e_d + 4e_s)(eħ/2M_Λc)(1 + δ_Λ), where δ_Λ is small. In deriving the sum rules, special attention is paid to the strange-quark mass-dependent terms and to several additional terms not considered in earlier works; these terms are now appropriately incorporated. The sum rule is analyzed using the ratio method. Using the external-field-induced susceptibilities determined earlier, we find that the calculated value of μΛ is in agreement with experiment.
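Reading the quoted structure as μΛ = (2/3)(e_u + e_d + 4e_s)(eħ/2M_Λc)(1 + δ_Λ), a quick back-of-envelope check, assuming the standard quark charges and M_N ≈ 0.938 GeV, M_Λ ≈ 1.116 GeV:

```latex
% e_u = 2/3,\quad e_d = -1/3,\quad e_s = -1/3
\frac{2}{3}\left(e_u + e_d + 4e_s\right)
  = \frac{2}{3}\left(\frac{2}{3} - \frac{1}{3} - \frac{4}{3}\right)
  = -\frac{2}{3},
\qquad
\mu_\Lambda
  = -\frac{2}{3}\,\frac{M_N}{M_\Lambda}\,(1+\delta_\Lambda)\,\mu_N
  \approx -0.56\,(1+\delta_\Lambda)\,\mu_N .
```

Comparing with the measured value μΛ ≈ −0.613 μ_N gives δ_Λ ≈ 0.09, consistent with the abstract's statement that δ_Λ is small.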