Abstract:
This paper presents the design of a novel single-chip adaptive beamformer capable of performing 50 GFLOPS (giga floating-point operations per second). The core processor is a QR array implemented on a fully efficient linear systolic architecture, derived using a mapping that allows individual processors for boundary and internal cell operations. In addition, the paper highlights a number of rapid design techniques that have been used to realise this system. These include an architecture synthesis tool for quickly developing the circuit architecture and the utilisation of a library of parameterisable silicon intellectual property (IP) cores to rapidly develop detailed silicon designs.
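The QR array in such designs typically implements recursive least squares via Givens rotations, with boundary cells generating rotation parameters and internal cells applying them across a row. The NumPy sketch below illustrates that cell-level split for a real-valued QR update; it is a behavioural illustration of the algorithm only, not the paper's systolic mapping or silicon architecture, and all names are illustrative.

```python
import numpy as np

def qrd_update(R, u, lam=0.99):
    """One recursive QR update as used in QRD-RLS beamforming.
    A 'boundary cell' computes the Givens rotation that annihilates u[i]
    against the diagonal R[i, i]; 'internal cells' apply that rotation
    along the rest of the row. R: (n, n) upper triangular; u: (n,) new
    array snapshot; lam: exponential forgetting factor. Real-valued
    sketch; complex data would need conjugated rotations."""
    n = u.size
    R = np.sqrt(lam) * R.copy()
    u = u.astype(float).copy()
    for i in range(n):
        # boundary cell: rotation parameters (c, s)
        r = np.hypot(R[i, i], u[i])
        if r == 0.0:
            continue
        c, s = R[i, i] / r, u[i] / r
        R[i, i] = r
        # internal cells: propagate (c, s) to the right; u[i] is annihilated
        for j in range(i + 1, n):
            R[i, j], u[j] = c * R[i, j] + s * u[j], c * u[j] - s * R[i, j]
    return R

# toy usage: accumulate 100 random snapshots into R
rng = np.random.default_rng(0)
R = np.zeros((4, 4))
for _ in range(100):
    R = qrd_update(R, rng.standard_normal(4))
```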
Abstract:
Stand-alone virtual environments (VEs) using haptic devices have proved useful for assembly/disassembly simulation of mechanical components. Nowadays, collaborative haptic virtual environments (CHVEs) are also emerging. A new peer-to-peer collaborative haptic assembly simulator (CHAS) has been developed whereby two users can simultaneously carry out assembly tasks using haptic devices. Two major challenges have been addressed: virtual scene synchronization (consistency) and the provision of reliable and effective haptic feedback. A consistency-maintenance scheme has been designed to meet the first challenge, and results show that consistency is guaranteed. Furthermore, a force-smoothing algorithm has been developed which is shown to improve the quality of force feedback under adverse network conditions. A range of laboratory experiments and several real trials between Labein (Spain) and Queen's University Belfast (Northern Ireland) have verified that CHAS can provide adequate haptic interaction when both users perform remote assemblies (assembly of one user's object with an object grasped by the other user). Moreover, when collisions between grasped objects occur (dependent collisions), the haptic feedback usually provides satisfactory haptic perception. Based on a qualitative study, it is shown that the haptic feedback obtained during remote assemblies with dependent collisions can further improve the sense of co-presence between users compared with visual feedback alone.
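The abstract does not detail the force-smoothing algorithm, so the sketch below shows only the general idea: a first-order low-pass filter that attenuates force spikes caused by delayed or bursty network updates. The class name and `alpha` are illustrative, not part of CHAS.

```python
class ForceSmoother:
    """Minimal first-order low-pass filter for haptic force samples.
    Generic illustration of force smoothing under network jitter, not
    the CHAS algorithm; alpha trades responsiveness for smoothness."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.f = None  # last smoothed force (fx, fy, fz)

    def update(self, raw):
        if self.f is None:
            self.f = tuple(raw)
        else:
            self.f = tuple(self.alpha * r + (1.0 - self.alpha) * p
                           for r, p in zip(raw, self.f))
        return self.f

smoother = ForceSmoother(alpha=0.2)
for raw in [(0.0, 0.0, 1.0), (0.0, 0.0, 4.0), (0.0, 0.0, 1.5)]:
    print(smoother.update(raw))  # the jittery spike at 4.0 N is attenuated
```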
Abstract:
Although e-commerce adoption and customers' initial purchasing behavior have been well studied in the literature, repeat purchase intention and its antecedents remain understudied. This study proposes a model to understand the extent to which trust mediates the effects of vendor-specific factors on customers' intention to repurchase from an online vendor. The model was tested and validated in two different country settings. We found that trust fully mediates the relationships between perceived reputation, perceived capability of order fulfillment, and repurchasing intention, and partially mediates the relationship between perceived website quality and repurchasing intention in both countries. Moreover, multi-group analysis reveals no significant between-country differences in the model with regard to the antecedents and outcomes of trust, except for the effect of reputation on trust. Academic and practical implications and future research are discussed. © 2009 Operational Research Society Ltd.
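As a rough illustration of what "trust fully mediates" means operationally, the sketch below bootstraps the indirect effect a·b in a simple x → m → y mediation (e.g., reputation → trust → repurchasing intention). It is a toy stand-in with simulated data; the study itself would use structural equation modelling with latent constructs, which this does not reproduce.

```python
import numpy as np

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=0):
    """Bootstrap 95% CI for the indirect effect a*b in x -> m -> y.
    a: slope of m on x; b: slope of y on m, controlling for x."""
    rng = np.random.default_rng(seed)
    n = len(x)
    effects = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]
        b = np.linalg.lstsq(np.column_stack([ms, xs, np.ones(n)]),
                            ys, rcond=None)[0][0]
        effects[i] = a * b
    return np.percentile(effects, [2.5, 97.5])

# simulated full mediation: y depends on x only through m
rng = np.random.default_rng(1)
x = rng.standard_normal(300)
m = 0.6 * x + 0.5 * rng.standard_normal(300)   # "trust" driven by "reputation"
y = 0.7 * m + 0.5 * rng.standard_normal(300)   # "intention" driven by "trust"
print(bootstrap_indirect_effect(x, m, y))      # CI excludes 0 -> mediation
```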
Abstract:
The decision of the U.S. Supreme Court in 1991 in Feist Publications, Inc. v. Rural Tel. Service Co. affirmed originality as a constitutional requirement for copyright. Originality has a specific sense and is constituted by a minimal degree of creativity and independent creation. The not original is the more developed concept within the decision. It includes the absence of a minimal degree of creativity as a major constituent. Different levels of absence of creativity are also distinguished, from the extreme absence of creativity to insufficient creativity. There is a gestalt effect of analogy between the delineation of the not original and the concept of computability. More specific correlations can be found within the extreme absence of creativity. "[S]o mechanical" in the decision can be correlated with an automatic mechanical procedure, and with clauses that historically resonate with understandings of computability as what would naturally be regarded as computable. The routine within the extreme absence of creativity can be regarded as the product of a computational process. The concern of this article is with rigorously establishing an understanding of the extreme absence of creativity, primarily through the correlations with aspects of computability. The understanding established is consistent with the other elements of the not original. It is also revealed to be testable under real-world conditions. The possibilities for understanding insufficient creativity, a minimal degree of creativity, and originality, from the understanding developed of the extreme absence of creativity, are indicated.
Abstract:
The mechanism of energy-converting NADH:ubiquinone oxidoreductase (complex I) is still unknown. A current controversy centers around the question whether electron transport of complex I is always linked to vectorial proton translocation or whether in some organisms the enzyme pumps sodium ions instead. To develop better experimental tools to elucidate its mechanism, we have reconstituted the affinity-purified enzyme into proteoliposomes and monitored the generation of ΔpH and Δψ. We tested several detergents to solubilize the asolectin used for liposome formation. Tightly coupled proteoliposomes containing highly active complex I were obtained by detergent removal with BioBeads after total solubilization of the phospholipids with n-octyl-β-D-glucopyranoside. We have used dyes to monitor the formation of the two components of the proton motive force, ΔpH and Δψ, across the liposomal membrane, and analyzed the effects of inhibitors, uncouplers and ionophores on this process. We show that electron transfer of complex I of the lower eukaryote Y. lipolytica is clearly linked to proton translocation. While this study was not specifically designed to demonstrate possible additional sodium-translocating properties of complex I, we did not find indications for primary or secondary Na+ translocation by Y. lipolytica complex I. (c) 2005 Elsevier B.V. All rights reserved.
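For orientation, the two measured components combine into the proton motive force through the standard chemiosmotic relation Δp = Δψ − (2.303RT/F)·ΔpH, roughly Δψ − 59 mV·ΔpH at 25 °C. The snippet below simply evaluates that textbook relation with made-up example values; it is not data from this study, and sign conventions for ΔpH vary between texts.

```python
def proton_motive_force(delta_psi_mV, delta_pH, T=298.15):
    """Standard chemiosmotic relation dp = dpsi - (2.303*R*T/F)*dpH.
    Example values only, not measurements from the study; note that
    the sign convention for dpH (in minus out) varies in the literature."""
    R, F = 8.314, 96485.0               # J/(mol K), C/mol
    z_mV = 2.303 * R * T / F * 1000.0   # ~59 mV per pH unit at 25 C
    return delta_psi_mV - z_mV * delta_pH

print(proton_motive_force(120.0, -0.5))  # 120 + ~29.6 = ~149.6 mV
```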
Abstract:
La3FMo4O16 crystallizes in the triclinic crystal system with space group P1̄ [a = 724.86(2) pm, b = 742.26(2) pm, c = 1469.59(3) pm, α = 101.683(2)°, β = 102.118(2)°, γ = 100.279(2)°] with two formula units per unit cell. The three crystallographically independent La³⁺ cations each show a coordination number of nine, with one F⁻ and eight O²⁻ anions forming distorted monocapped square antiprisms. The fluoride anion is coordinated by all three lanthanum cations to form a nearly planar triangle. Besides three crystallographically independent tetrahedral [MoO4]²⁻ units, a fourth one with a higher coordination number (CN = 4+1) can be found in the crystal structure, forming a dimeric entity with the formula [Mo2O8]⁴⁻ consisting of two edge-connected square pyramids. Several spectroscopic measurements were performed on the title compound, such as infrared, Raman, and diffuse reflectance spectroscopy. Furthermore, La3FMo4O16 was investigated for its capacity to serve as a host material for doping with luminescence-active cations such as Ce³⁺ or Pr³⁺. Therefore, luminescence spectroscopy as well as EPR measurements were performed on doped samples of the title compound. Both the pure and the doped compounds can be synthesized by fusing La2O3, LaF3, and MoO3 (ratio 4:1:12; ca. 1% CeF3 or PrF3 as dopant, respectively) in evacuated silica ampoules at 850 °C for 7 d.
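For orientation, the reported lattice parameters fix the triclinic cell volume through the standard formula V = abc·√(1 − cos²α − cos²β − cos²γ + 2·cosα·cosβ·cosγ); the short sketch below evaluates it (a derived quantity, not one quoted in the abstract).

```python
from math import cos, radians, sqrt

def triclinic_volume(a, b, c, alpha, beta, gamma):
    """Unit-cell volume of a triclinic lattice (lengths in pm, angles in degrees)."""
    ca, cb, cg = (cos(radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

V = triclinic_volume(724.86, 742.26, 1469.59, 101.683, 102.118, 100.279)
print(f"{V / 1e6:.1f} x 10^6 pm^3")  # ~736.6 x 10^6 pm^3, i.e. ~737 A^3
```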
Abstract:
Hunter and Konieczny explored the relationships between measures of inconsistency for a belief base and the minimal inconsistent subsets of that belief base in several of their papers. In particular, an inconsistency value termed MIVC, defined from minimal inconsistent subsets, can be considered as a Shapley Inconsistency Value. Moreover, it can be axiomatized completely in terms of five simple axioms. MinInc, one of the five axioms, states that each minimal inconsistent set has the same amount of conflict. However, it conflicts with the intuition illustrated by the lottery paradox, which states that as the size of a minimal inconsistent belief base increases, the degree of inconsistency of that belief base becomes smaller. To address this, we present two kinds of revised inconsistency measures for a belief base from its minimal inconsistent subsets. Each of these measures considers the size of each minimal inconsistent subset as well as the number of minimal inconsistent subsets of a belief base. More specifically, we first present a vectorial measure to capture the inconsistency for a belief base, which is more discriminative than MIVC. Then we present a family of weighted inconsistency measures based on the vectorial inconsistency measure, which allow us to capture the inconsistency for a belief base in terms of a single numerical value as usual. We also show that each of the two kinds of revised inconsistency measures can be considered as a particular Shapley Inconsistency Value, and can be axiomatically characterized by the corresponding revised axioms presented in this paper.
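A minimal sketch of the quantities involved, assuming the minimal inconsistent subsets (MISes) are already computed: the per-formula value follows Hunter and Konieczny's MIVC (each formula inherits 1/|M| from every MIS M containing it), while the aggregate uses an illustrative weight w(n) = 1/n to encode the lottery-paradox intuition that larger MISes carry less conflict. The paper's own vectorial and weighted families may differ from this choice.

```python
def weighted_inconsistency(mis_list, w=lambda n: 1.0 / n):
    """Aggregate inconsistency of a belief base from its MISes.
    Each MIS of size n contributes w(n); the default w(n) = 1/n makes
    larger minimal inconsistent subsets count for less (illustrative
    weight, not necessarily the paper's)."""
    return sum(w(len(m)) for m in mis_list)

def formula_blame(mis_list):
    """Per-formula MIVC-style value: a formula inherits 1/|M| from
    every minimal inconsistent subset M it belongs to."""
    blame = {}
    for m in mis_list:
        for phi in m:
            blame[phi] = blame.get(phi, 0.0) + 1.0 / len(m)
    return blame

# toy base {a, ~a, b, ~b|~c, c} with MISes {a, ~a} and {b, ~b|~c, c}
mis = [frozenset({"a", "~a"}), frozenset({"b", "~b|~c", "c"})]
print(weighted_inconsistency(mis))  # 1/2 + 1/3: the 3-element MIS counts less
print(formula_blame(mis))
```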
Abstract:
In the last decade, data mining has emerged as one of the most dynamic and lively areas in information technology. Although many algorithms and techniques for data mining have been proposed, they focus either on domain-independent techniques or on very specific domain problems. A general requirement in bridging the gap between academia and business is to cater to the domain-related issues surrounding real-life applications, such as constraints, organizational factors, domain expert knowledge, domain adaptation, and operational knowledge. Unfortunately, these have either not been addressed, or not been sufficiently addressed, in current data mining research and development. Domain-Driven Data Mining (D3M) aims to develop general principles, methodologies, and techniques for modeling and merging comprehensive domain-related factors and synthesized ubiquitous intelligence surrounding problem domains with the data mining process, and for discovering knowledge to support business decision-making. This paper aims to report original, cutting-edge, and state-of-the-art progress in D3M. It covers theoretical and applied contributions aiming to: 1) propose next-generation data mining frameworks and processes for actionable knowledge discovery; 2) investigate effective (automated, human- and machine-centered and/or human-machine co-operated) principles and approaches for acquiring, representing, modeling, and engaging ubiquitous intelligence in real-world data mining; and 3) develop workable and operational systems balancing technical significance and application concerns, and converting and delivering actionable knowledge into operational application rules to seamlessly engage application processes and systems.
Abstract:
Developing a desirable framework for handling inconsistencies in software requirements specifications is a challenging problem. It has been widely recognized that the relative priority of requirements can help developers to make necessary trade-off decisions for resolving conflicts. However, in most distributed development settings, such as viewpoints-based approaches, different stakeholders may assign different levels of priority to the same shared requirements statement from their own perspectives. The disagreement in the local levels of priority assigned to the same shared requirements statement often puts developers into a dilemma during the inconsistency handling process. The main contribution of this paper is to present a prioritized merging-based framework for handling inconsistency in distributed software requirements specifications. Given a set of distributed inconsistent requirements collections with local prioritizations, we first construct a requirements specification with a prioritization from an overall perspective. We provide two approaches to constructing a requirements specification with a global prioritization: a merging-based construction and a priority vector-based construction. Following this, we derive proposals for handling inconsistencies from the globally prioritized requirements specification in terms of prioritized merging. Moreover, from the overall perspective, these proposals may be viewed as the most appropriate for modifying the given inconsistent requirements specification, in the sense of the ordering relation over all the consistent subsets of the requirements specification. Finally, we consider applying negotiation-based techniques to viewpoints so as to identify an acceptable common proposal from these proposals.
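As a toy illustration of the pipeline (build a global prioritization, then handle inconsistency guided by it), the sketch below aggregates local priority levels with a simple operator and greedily builds one consistent proposal. Both the aggregation operator and the consistency oracle are placeholders; the paper's merging-based and priority vector-based constructions are more involved than this.

```python
def global_priority(local_views, aggregate=min):
    """Merge local priority levels (1 = highest) assigned by different
    viewpoints into one global level per requirement. The aggregation
    operator is a design choice; `min` adopts the most urgent local view."""
    merged = {}
    for view in local_views:
        for req, level in view.items():
            merged.setdefault(req, []).append(level)
    return {req: aggregate(levels) for req, levels in merged.items()}

def handle_inconsistency(priorities, consistent):
    """Greedy proposal: add requirements in global priority order,
    keeping each only if the running set stays consistent.
    `consistent` is an oracle supplied by the requirements tooling."""
    chosen = []
    for req in sorted(priorities, key=priorities.get):
        if consistent(chosen + [req]):
            chosen.append(req)
    return chosen

# toy run: r1 and r2 conflict; r1 survives via its higher global priority
views = [{"r1": 1, "r2": 2}, {"r1": 3, "r2": 1, "r3": 2}]
prio = global_priority(views)
print(handle_inconsistency(prio, lambda s: not {"r1", "r2"} <= set(s)))
```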
Abstract:
Hardware synthesis from dataflow graphs of signal processing systems is a growing research area as focus shifts to high-level design methodologies. For data-intensive systems, dataflow-based synthesis can lead to inefficient usage of memory due to the restrictive nature of synchronous dataflow and its inability to easily model data reuse. This paper explores how dataflow graph changes can be used to drive both the on-chip and off-chip memory organisation, and how these memory architectures can be mapped to a hardware implementation. By exploiting the data reuse inherent to many image processing algorithms and by creating memory hierarchies, off-chip memory bandwidth can be reduced by a factor of a thousand from the original dataflow graph level specification of a motion estimation algorithm, with a minimal increase in memory size. This analysis is verified using results gathered from an implementation of the motion estimation algorithm on a Xilinx Virtex-4 FPGA, where the delay between the memories and processing elements drops from 14.2 ns to 1.878 ns through refinement of the memory architecture. Care must be taken when modelling these algorithms, however, as inefficiencies in these models can easily translate into overuse of hardware resources.
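A back-of-the-envelope reuse model makes the bandwidth argument concrete: buffering the search window of a full-search block matcher on chip, and then exploiting the overlap between windows of horizontally adjacent blocks, cuts off-chip reads sharply. The parameters and reuse levels below are illustrative and do not reproduce the paper's memory model or its factor-of-a-thousand figure, which relies on a deeper hierarchy.

```python
def off_chip_reads(N=16, p=16, frame_w=720, frame_h=576, reuse="none"):
    """Rough count of off-chip reference-frame reads for full-search
    block-matching motion estimation (illustrative model only).
    - 'none':   every candidate match re-reads its N x N block
    - 'window': the (N+2p)^2 search window is buffered and read once
    - 'row':    adjacent blocks share window columns, so each block
                fetches only the N new columns of its window"""
    blocks = (frame_w // N) * (frame_h // N)
    if reuse == "none":
        per_block = (2 * p + 1) ** 2 * N * N   # candidates x pixels each
    elif reuse == "window":
        per_block = (N + 2 * p) ** 2
    elif reuse == "row":
        per_block = N * (N + 2 * p)
    else:
        raise ValueError(reuse)
    return blocks * per_block

base = off_chip_reads(reuse="none")
for level in ("window", "row"):
    print(level, f"{base / off_chip_reads(reuse=level):.0f}x fewer reads")
```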
Abstract:
In this paper, a novel video-based multimodal biometric verification scheme using subspace-based low-level feature fusion of face and speech is developed for specific speaker recognition for perceptual human-computer interaction (HCI). In the proposed scheme, the human face is tracked and face pose is estimated to weight the detected face-like regions in successive frames, where ill-posed faces and false-positive detections are assigned lower credit to enhance the accuracy. In the audio modality, mel-frequency cepstral coefficients (MFCCs) are extracted for voice-based biometric verification. In the fusion step, features from both modalities are projected into a nonlinear Laplacian Eigenmap subspace for multimodal speaker recognition and combined at low level. The proposed approach is tested on a video database of ten human subjects, and the results show that the proposed scheme attains better accuracy than both conventional multimodal fusion using latent semantic analysis and the single-modality verifications. Experiments in MATLAB show the potential of the proposed scheme to attain real-time performance for perceptual HCI applications.
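A minimal sketch of the low-level fusion step, assuming face descriptors and MFCC vectors have already been extracted per frame: concatenate the two modalities and project into a nonlinear Laplacian Eigenmap subspace, here via scikit-learn's SpectralEmbedding as a stand-in for the authors' implementation. Feature dimensions and random data are placeholders.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding  # Laplacian Eigenmap embedding

rng = np.random.default_rng(0)
face_feats = rng.standard_normal((200, 64))   # per-frame face descriptors (stub)
mfcc_feats = rng.standard_normal((200, 13))   # per-frame MFCC vectors (stub)

# low-level (feature) fusion: concatenate, then embed nonlinearly
fused = np.hstack([face_feats, mfcc_feats])
embedding = SpectralEmbedding(n_components=10, n_neighbors=15)
subspace_feats = embedding.fit_transform(fused)  # inputs to the verifier
print(subspace_feats.shape)                      # (200, 10)
```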