842 results for Input-output model


Relevance:

90.00%

Publisher:

Abstract:

The performance of space-time block codes can be improved by coordinate interleaving of the input symbols from rotated M-ary phase shift keying (MPSK) and M-ary quadrature amplitude modulation (MQAM) constellations. This paper presents a performance analysis of coordinate-interleaved space-time codes, which are a subset of single-symbol maximum-likelihood decodable linear space-time block codes, for wireless multiple-antenna terminals. The analytical and simulation results show that full diversity is achievable. Using the equivalent single-input single-output model, simple expressions for the average bit error rates are derived over flat uncorrelated Rayleigh fading channels. Optimum rotation angles are obtained by minimizing the average bit error rate curves.
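
As an illustration of the idea (not taken from the paper), the following minimal Monte Carlo sketch pairs symbols from a rotated QPSK constellation, swaps their quadrature coordinates so that the two coordinates of each symbol experience independent Rayleigh fades, and applies the decoupled single-symbol ML metric. The constellation size, rotation angles, and SNR value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ci_qpsk_ber(angle_rad, snr_db, n_pairs=100_000):
    """BER of coordinate-interleaved, rotated QPSK over flat Rayleigh fading."""
    # Rotated Gray-mapped QPSK constellation and its bit labels.
    base = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)
    bits = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
    const = base * np.exp(1j * angle_rad)

    idx = rng.integers(0, 4, size=(n_pairs, 2))        # two symbols per pair
    s = const[idx]

    # Coordinate interleaving: swap the Q components of the paired symbols.
    u1 = s[:, 0].real + 1j * s[:, 1].imag
    u2 = s[:, 1].real + 1j * s[:, 0].imag

    n0 = 10 ** (-snr_db / 10)
    fade = lambda n: (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    noise = lambda n: np.sqrt(n0 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    h1, h2 = fade(n_pairs), fade(n_pairs)
    z1 = (h1 * u1 + noise(n_pairs)) / h1               # zero-forced slot 1
    z2 = (h2 * u2 + noise(n_pairs)) / h2               # zero-forced slot 2

    # Single-symbol ML metrics: each coordinate weighted by its own fade power.
    g1, g2 = np.abs(h1) ** 2, np.abs(h2) ** 2
    m1 = (g1[:, None] * (z1.real[:, None] - const.real) ** 2
          + g2[:, None] * (z2.imag[:, None] - const.imag) ** 2)
    m2 = (g2[:, None] * (z2.real[:, None] - const.real) ** 2
          + g1[:, None] * (z1.imag[:, None] - const.imag) ** 2)
    dec = np.stack([m1.argmin(axis=1), m2.argmin(axis=1)], axis=1)

    return np.mean(bits[dec] != bits[idx])

for ang in np.deg2rad([0.0, 15.0, 30.0]):
    print(f"rotation {np.rad2deg(ang):4.1f} deg -> BER {ci_qpsk_ber(ang, 20):.4f}")
```

A nonzero rotation makes every constellation point distinct in each coordinate, which is what allows the interleaved coordinates to provide diversity; sweeping the angle in such a simulation is one way to locate the BER minimum mentioned above.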

Relevance:

90.00%

Publisher:

Abstract:

The near-infrared diffuse optical tomography (DOT) technique is capable of providing good quantitative reconstruction of tissue absorption and scattering properties with additional inputs such as the input and output modulation depths and a correction for photon leakage. We have calculated the two-dimensional (2D) input modulation depth from three-dimensional (3D) diffusion in order to model the 2D diffusion of photons. The photon leakage when light traverses from the phantom to the fiber tip is estimated using a solid angle model. The experiments are carried out for single (5 and 6 mm) as well as multiple (6 and 8 mm) inhomogeneities with a higher absorption coefficient embedded in a homogeneous phantom. The diffusion equation for photon transport is solved using the finite element method, and the Jacobian is modeled for reconstructing the optical parameters. We study the development and performance of a DOT system using a single modulated light source and multiple detectors. Dual-source methods are reported to have better reconstruction capabilities for resolving and localizing single as well as multiple inhomogeneities because of their superior noise rejection. However, an experimental setup with dual sources is much more difficult to implement, because two identical out-of-phase light probes must be adjusted symmetrically on either side of the detector during scanning. Our work shows that, with a relatively simpler single-source system, the results are better in terms of resolution and localization. The experiments are carried out with 5 and 6 mm inhomogeneities separately, and with 6 and 8 mm inhomogeneities together, with an absorption coefficient almost three times that of the background. The results show that our experimental single-source system, with additional inputs such as the 2D input/output modulation depth and the air-fiber interface correction, is capable of detecting the 5 and 6 mm inhomogeneities separately and can identify the size difference of multiple inhomogeneities such as 6 and 8 mm. The localization error is zero. The recovered absorption coefficient is 93% of that of the inhomogeneity embedded in the experimental phantom.
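
For context (not from the paper), Jacobian-based DOT reconstruction is commonly carried out with a regularized Gauss-Newton update of the following form; the function names and regularization value are illustrative.

```python
import numpy as np

def update_absorption(mu_a, J, measured, modeled, lam=1e-3):
    """One Tikhonov-regularized Gauss-Newton step for the absorption image.

    mu_a     : current nodal absorption coefficients (n_nodes,)
    J        : Jacobian d(data)/d(mu_a) from the FEM forward model, (n_meas, n_nodes)
    measured : measured boundary data (n_meas,)
    modeled  : forward-model data at the current estimate (n_meas,)
    """
    residual = measured - modeled
    H = J.T @ J + lam * np.eye(J.shape[1])       # regularized normal equations
    delta = np.linalg.solve(H, J.T @ residual)   # update direction
    return mu_a + delta
```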

Relevance:

90.00%

Publisher:

Abstract:

In the field of mechanics, it is a long-standing goal to measure quantum behavior in ever larger and more massive objects. It may now seem like an obvious conclusion, but until recently it was not clear whether a macroscopic mechanical resonator -- built up from nearly 10^13 atoms -- could be fully described as an ideal quantum harmonic oscillator. With recent advances in the fields of opto- and electro-mechanics, such systems offer a unique advantage in probing the quantum noise properties of macroscopic electrical and mechanical devices, properties that ultimately stem from Heisenberg's uncertainty relations. Given the rapid progress in device capabilities, landmark results of quantum optics are now being extended into the regime of macroscopic mechanics.

The purpose of this dissertation is to describe three experiments -- motional sideband asymmetry, back-action evasion (BAE) detection, and mechanical squeezing -- that are directly related to the topic of measuring quantum noise with mechanical detection. These measurements all share a pertinent feature: they explore quantum noise properties in a macroscopic electromechanical device driven by a minimum of two microwave drive tones; hence the title of this work: "Quantum electromechanics with two tone drive".

In the following, we will first introduce a quantum input-output framework that we use to model the electromechanical interaction and capture subtleties related to interpreting different microwave noise detection techniques. Next, we will discuss the fabrication and measurement details that we use to cool and probe these devices with coherent and incoherent microwave drive signals. Having developed our tools for signal modeling and detection, we explore the three-wave mixing interaction between the microwave and mechanical modes, whereby mechanical motion generates motional sidebands corresponding to up- and down-frequency conversions of microwave photons. Because of quantum vacuum noise, the rates of these processes are expected to be unequal. We will discuss the measurement and interpretation of this asymmetric motional noise in an electromechanical device cooled near the ground state of motion.
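
Not part of the dissertation text, but as a small illustrative calculation: under the standard assumption that the up-converted (anti-Stokes) sideband power scales as n and the down-converted (Stokes) sideband scales as n + 1, the sideband asymmetry directly yields the phonon occupancy. The area values below are made up.

```python
def occupancy_from_asymmetry(area_stokes, area_anti_stokes):
    """Infer the mean phonon number n from calibrated motional sideband areas.

    Assumes area_anti_stokes is proportional to n and area_stokes to n + 1,
    with the same proportionality constant (equal measurement rates for both tones).
    """
    return area_anti_stokes / (area_stokes - area_anti_stokes)

# Hypothetical calibrated sideband areas (arbitrary but equal units):
print(occupancy_from_asymmetry(area_stokes=1.5, area_anti_stokes=1.0))  # -> n = 2.0
```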

Next, we consider an overlapped two-tone pump configuration that produces a time-modulated electromechanical interaction. By careful control of this drive field, we report a quantum non-demolition (QND) measurement of a single motional quadrature. Incorporating a second pair of drive tones, we directly measure the measurement back-action associated with both classical and quantum noise of the microwave cavity. Lastly, we slightly modify our drive scheme to generate quantum squeezing in a macroscopic mechanical resonator. Here, we will focus on data analysis techniques that we use to estimate the quadrature occupations. We employ Bayesian spectrum fitting and parameter estimation, which serve as powerful tools for incorporating the many known sources of measurement and fit error that are unavoidable in such work.

Relevance:

90.00%

Publisher:

Abstract:

An analysis of the factor-product relationship in the semi-intensive shrimp farming system of Kerala, on both a per-farm and a per-hectare basis, is attempted and the results are reported in this paper. The Cobb-Douglas model, in which the physical relationship between input and output is estimated and marginal analysis is then employed to evaluate producer behaviour, was used for the analysis. The study was based on empirical data collected from November 1994 to May 1996, covering three seasons, from 21 farms spread over the Alappuzha, Ernakulam and Kasaragod districts of the state. The sample covered a total area of 61.06 ha. Of the 11 explanatory variables considered in the model, farm size, casual labour and chemical fertilizers were found to be statistically significant. It was also observed that factors such as age of the pond, experience of the farmer, feed, miscellaneous costs, number of seed stocked and skilled labour contributed positively to output. The estimated industry production function exhibited unitary returns to scale. The estimated mean output was 3937 kg/ha. A test for multicollinearity showed no problem of a dominant variable. On the basis of the marginal products and the given input-output prices, the optimum amounts of seed, feed and casual labour were estimated at about 97139 seed, 959 kg of feed and 585 man-days of casual labour per farm. This indicates the need to reduce the stocking density and the amount of feed from present levels in order to maximise profit. Based on the findings of the study, suggestions for improving the industry production function are proposed.
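
Not the authors' dataset, but a minimal sketch of the method: a Cobb-Douglas production function is typically fitted by log-linear least squares, and the optimum level of an input follows from equating its marginal value product to its price. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: output y and two inputs (e.g., feed, labour) for 21 farms.
feed = rng.uniform(500, 1500, 21)                              # kg per farm
labour = rng.uniform(300, 900, 21)                             # man-days per farm
y = 5.0 * feed**0.4 * labour**0.3 * rng.lognormal(0, 0.1, 21)  # kg per farm

# Cobb-Douglas: ln y = ln A + b1 ln x1 + b2 ln x2  ->  ordinary least squares.
X = np.column_stack([np.ones(21), np.log(feed), np.log(labour)])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
lnA, b_feed, b_labour = coef
print("elasticities:", b_feed, b_labour, "returns to scale:", b_feed + b_labour)

# Optimum feed where marginal value product equals the feed price:
# p_y * b_feed * y / x_feed = p_feed  ->  x_feed* = p_y * b_feed * y / p_feed
p_y, p_feed = 250.0, 30.0                                      # hypothetical prices
feed_opt = p_y * b_feed * y.mean() / p_feed                    # evaluated at mean output
print("optimum feed per farm (approx):", feed_opt)
```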

Relevance:

90.00%

Publisher:

Abstract:

This article presents a framework that formally describes the underlying unsteady and conjugate heat transfer processes that take place in thermodynamic systems, along with results from its application to the characterization of thermodynamic losses due to irreversible heat transfer during reciprocating compression and expansion processes in a gas spring. Specifically, a heat transfer model is proposed that solves the one-dimensional unsteady heat conduction equation in the solid simultaneously with the first law in the gas phase, with an imposed heat transfer coefficient taken from suitable experiments in gas springs. Even at low volumetric compression ratios (of 2.5), notable effects of unsteady heat transfer to the solid walls are revealed, with thermally induced thermodynamic cycle (work) losses of up to 14% (relative to the work input/output in equivalent adiabatic and reversible compression/expansion processes) at intermediate Péclet numbers (i.e., normalized frequencies) when unfavorable solid and gas materials are selected, and closer to 10-12% for more common material choices. The contribution of the solid toward these values, through the conjugate variations attributed to the thickness of the cylinder wall, is about 8 and 2 percentage points, respectively, showing a maximum at intermediate thicknesses. At higher compression ratios (of 6) a 19% worst-case loss is reported for common materials. These results strongly suggest that in designing high-efficiency reciprocating machines the full conjugate and unsteady problem must be considered, and that the role of the solid in determining performance cannot, in general, be neglected. © 2014 Richard Mathie, Christos N. Markides, and Alexander J. White. Published with License by Taylor & Francis.
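
The following is not the paper's conjugate model; it is a crude lumped sketch of the gas-side first-law balance with an imposed, constant heat transfer coefficient and a fixed wall temperature, used to estimate the net cycle work (the thermally induced loss) relative to the adiabatic compression work. All parameter values are made up.

```python
import numpy as np

# Ideal-gas gas spring with sinusoidal volume variation and wall heat transfer.
R, gamma = 287.0, 1.4                 # air
cv = R / (gamma - 1.0)
T_wall, p0, T0 = 300.0, 1e5, 300.0    # wall temperature and initial state
V_max, cr = 1e-4, 2.5                 # m^3, volumetric compression ratio
V_min = V_max / cr
h, A = 50.0, 0.01                     # assumed heat transfer coefficient, wall area
f = 10.0                              # cycle frequency, Hz
m = p0 * V_max / (R * T0)             # trapped gas mass

def V(t):
    return V_min + 0.5 * (V_max - V_min) * (1 + np.cos(2 * np.pi * f * t))

n_cyc, steps = 20, 20000              # run several cycles to approach a periodic state
t = np.linspace(0.0, n_cyc / f, steps)
dt = t[1] - t[0]
T, W_net = T0, 0.0
for k in range(steps - 1):
    p = m * R * T / V(t[k])
    dV = V(t[k + 1]) - V(t[k])
    dQ = h * A * (T_wall - T) * dt            # wall heat transfer over the step
    T += (dQ - p * dV) / (m * cv)             # first law for the gas
    if k >= steps * (n_cyc - 1) // n_cyc:     # accumulate over the last cycle only
        W_net += p * dV                       # net boundary work (nonzero due to hysteresis)

W_adiabatic = m * cv * T0 * (cr ** (gamma - 1.0) - 1.0)  # reversible adiabatic compression work
print(f"cycle loss ~ {abs(W_net) / W_adiabatic:.1%} of the adiabatic compression work")
```

In an adiabatic, reversible gas spring the net work over a closed cycle is zero; the nonzero loop area produced by the wall heat transfer is the thermally induced loss discussed in the abstract.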

Relevance:

90.00%

Publisher:

Abstract:

This paper studies formation control of multiple UUVs (unmanned underwater vehicles) based on the leader-follower approach. A kinematic model of the system is established in the UUV body-fixed coordinate frame; this model is an improvement over the kinematic model in Cartesian coordinates and avoids the singularities that appear in polar coordinates. Input-output feedback linearization of this model yields a stable formation controller. In addition, to narrow the tuning range of the control parameters in the formation control law, an auxiliary algorithm is proposed, and on this basis the effective range of the parameters is analyzed. The formation control law is verified on a multi-UUV digital simulation platform, confirming the effectiveness of the improved kinematic model and of the control law.
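
The sketch below is not taken from the paper; it illustrates input-output feedback linearization for a planar follower vehicle that regulates an off-axis look-ahead point toward a desired offset from the leader. The unicycle-like kinematics, gains, and offsets are illustrative assumptions.

```python
import numpy as np

def follower_control(follower, leader_pos, leader_vel, offset, d=0.5, k=1.0):
    """One control update via input-output feedback linearization.

    follower   : (x, y, theta) of the follower
    leader_pos : (x, y) of the leader
    leader_vel : (vx, vy) of the leader
    offset     : desired position of the follower's look-ahead point relative
                 to the leader (expressed in the world frame, for simplicity)
    d          : look-ahead distance of the controlled output point
    k          : proportional gain on the output error
    Returns (v, omega): surge speed and yaw rate commands.
    """
    x, y, th = follower
    # Output = look-ahead point, whose kinematics are linear in (v, omega).
    p = np.array([x + d * np.cos(th), y + d * np.sin(th)])
    p_ref = np.asarray(leader_pos, float) + np.asarray(offset, float)
    p_ref_dot = np.asarray(leader_vel, float)
    # p_dot = B(theta) @ [v, omega], with B invertible for d != 0.
    B = np.array([[np.cos(th), -d * np.sin(th)],
                  [np.sin(th),  d * np.cos(th)]])
    u = np.linalg.solve(B, p_ref_dot + k * (p_ref - p))
    return u[0], u[1]

# Example: follower keeping station behind and to the left of a leader moving along +x.
v, omega = follower_control((0.0, 0.0, 0.1), (2.0, 0.0), (0.5, 0.0), (-1.0, 0.5))
print(v, omega)
```

Controlling the look-ahead point rather than the vehicle position is what makes the input-output map invertible everywhere, which is the same motivation as the paper's singularity-free kinematic model.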

Relevance:

90.00%

Publisher:

Abstract:

Starting from the viewpoint that the city is a complex system with demanding requirements, this thesis advances a new concept: an urban sustainable development strategy consists in a close harmony and amalgamation among the urban economy, the geo-environment and technological capital, and its optimum field lies in their mutual overlap. This quantitatively demarcates the optimum value field of urban sustainable development and establishes an academic foundation for describing and analyzing such a strategy. A series of cause-effect, topological and flux models of the urban system, together with their recognition modes, is established using the System Dynamics approach; these can distinguish urban states by the polarity of their entropy flows. At the same time, the matter, energy and information flows that arise in the course of urban development are analyzed on the basis of the input/output (I/O) relationships of the urban economy, and a new type of I/O relationship, namely a resources-environment account in which both resource and environmental factors are considered, is established. All of the above lays a theoretical foundation for resource economics and environmental economics, as well as for the quantitative relationships of interaction between urban development and the geo-environment, and provides a new approach to analyzing the national economy and urban sustainable development. Based on an analysis of the connection between the resource-environmental structure of the geo-environment and urban economic development, the Geoenvironmental Carrying Capability (GeCC) is analyzed. Furthermore, a series of definitions and formulas for the Gross Carrying Capability (GCC), Structure Carrying Capability (SCC) and Impulse Carrying Capability (ICC) is derived, which can be applied to evaluate both the quality and the capacity of the geo-environment and thereby to determine the scale and speed of urban development. A demonstrative study is applied to Beihai city (Guangxi province, PRC), and the numerical relationships between urban development and its geo-environment are studied through the I/O relationships of the urban economy, covering:
· the relationships between urban economic development and land use, as well as the consumption of groundwater, metallic minerals, mineral energy sources, non-metallic minerals and other geological resources;
· the relationships between the urban economy and waste outputs such as the industrial "three wastes", dust, refuse and domestic wastewater, together with the restricting effect of both resource-environmental factors and technological capital on urban growth;
· optimization and control analysis of the interaction between the urban economy and its geo-environment, determining the sensitive factors of the urban geo-environmental resources, wastes and economic sectors and their ordering, which can be applied to determine the urban industrial structure, scale and growth rate that match the geo-environment and technological capital;
· a suggested sustainable development strategy for the city.
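
Not from the thesis, but for reference: the kind of I/O relationship that a resources-environment account extends is the textbook Leontief input-output calculation below, where sectoral gross output is obtained from final demand and then multiplied by resource intensities. The coefficient matrix, demand vector, and intensities are made up.

```python
import numpy as np

# Technical coefficient matrix A: A[i, j] = input from sector i needed
# per unit of output of sector j (illustrative two-sector economy).
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
final_demand = np.array([100.0, 50.0])

# Leontief model: x = A x + d  =>  x = (I - A)^(-1) d
x = np.linalg.solve(np.eye(2) - A, final_demand)
print("gross output by sector:", x)

# Environmental extension: resource use per unit of sectoral output.
resource_intensity = np.array([0.5, 1.2])   # e.g., water per unit output
print("total resource requirement:", resource_intensity @ x)
```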

Relevance:

90.00%

Publisher:

Abstract:

I wish to propose a quite speculative new version of the grandmother cell theory to explain how the brain, or parts of it, may work. In particular, I discuss how the visual system may learn to recognize 3D objects. The model would apply directly to the cortical cells involved in visual face recognition. I will also outline the relation of our theory to existing models of the cerebellum and of motor control. Specific biophysical mechanisms can be readily suggested as part of a basic type of neural circuitry that can learn to approximate multidimensional input-output mappings from sets of examples and that is expected to be replicated in different regions of the brain and across modalities. The main points of the theory are:
- the brain uses modules for multivariate function approximation as basic components of several of its information processing subsystems;
- these modules are realized as HyperBF networks (Poggio and Girosi, 1990a,b);
- HyperBF networks can be implemented in terms of biologically plausible mechanisms and circuitry.
The theory predicts a specific type of population coding that represents an extension of schemes such as look-up tables. I will conclude with some speculations about the trade-off between memory and computation and the evolution of intelligence.
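
This is not Poggio and Girosi's implementation; it is a minimal radial-basis-function network (a plain special case of a HyperBF network) that learns a multidimensional input-output mapping from examples by solving for the output weights with least squares. The centers, width, and target function are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_features(X, centers, width):
    """Gaussian radial basis activations for each example/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# Examples of a 2-D -> 1-D mapping to be approximated.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])

centers = X[rng.choice(len(X), size=30, replace=False)]   # centers taken from the examples
Phi = rbf_features(X, centers, width=0.3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)                # output weights by least squares

X_test = rng.uniform(-1, 1, size=(50, 2))
y_hat = rbf_features(X_test, centers, 0.3) @ w
y_true = np.sin(np.pi * X_test[:, 0]) * np.cos(np.pi * X_test[:, 1])
print("test RMSE:", np.sqrt(np.mean((y_hat - y_true) ** 2)))
```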

Relevance:

90.00%

Publisher:

Abstract:

M. H. Lee and S. M. Garrett, Qualitative modelling of unknown interface behaviour, International Journal of Human Computer Studies, Vol. 53, No. 4, pp. 493-515, 2000.

Relevance:

90.00%

Publisher:

Abstract:

With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods for optimising either area or timing, while for power consumption optimisation one often employs heuristics that are specific to a particular design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation.

The first question of our research is: how can we build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into it. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits.

The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular, we address the implications of a systematic power optimisation methodology and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation.

Finally, the third question which this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay. Similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits in which the two conflicting objectives are power and delay.

The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under the zero-delay and a non-zero-delay model. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or the longest-path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for the global optimisation problem of locating a good approximation to the global optimum of a given function in a large search space. We use SA to decide probabilistically whether to move from one optimised solution to another, such that the dynamic power is optimised under given delay constraints and the delay is optimised under given power constraints.
A good approximation to the global optimum of the energy constraint is obtained. Uniform Cost Search (UCS) is a tree search algorithm used for traversing or searching a weighted tree, tree structure, or graph. We have used UCS to search within the AIG network for a specific AIG node order for the application of the reordering rules. After the reordering rules are applied, the AIG network is mapped to a netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest-path delay estimation and optimisation, and finally technology mapping to a netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates has been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts in ABC. A reduction of 23% in power and 15% in delay with minimal overhead is achieved, compared to the best known ABC results. Our approach has also been applied to a number of processors with combinational and sequential components, and significant savings are achieved.
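
The following is not the thesis's implementation (which operates on AIGs inside ABC); it is a generic simulated-annealing loop of the kind described, which accepts a candidate node reordering when it lowers estimated switching power while a delay constraint holds. The cost, move, and estimate functions are placeholders.

```python
import math
import random

def simulated_annealing(initial, neighbor, power_of, delay_of, delay_limit,
                        t0=1.0, t_min=1e-3, alpha=0.95, iters_per_t=50, seed=0):
    """Minimise estimated power under a delay constraint.

    neighbor(state)  -> a perturbed copy of state (e.g., a node reordering)
    power_of(state)  -> estimated switching power of the state
    delay_of(state)  -> estimated longest-path delay of the state
    """
    rng = random.Random(seed)
    state, best = initial, initial
    t = t0
    while t > t_min:
        for _ in range(iters_per_t):
            cand = neighbor(state)
            if delay_of(cand) > delay_limit:
                continue                                 # reject constraint violations
            delta = power_of(cand) - power_of(state)
            if delta < 0 or rng.random() < math.exp(-delta / t):
                state = cand                             # downhill, or uphill with prob e^(-delta/T)
                if power_of(state) < power_of(best):
                    best = state
        t *= alpha                                       # cool down
    return best

# Toy usage: order "nodes" so that high-activity nodes come first (a stand-in cost).
activities = [0.9, 0.1, 0.5, 0.7]
def toy_power(order): return sum(i * activities[n] for i, n in enumerate(order))
def toy_delay(order): return len(order)                  # constant: constraint always met
def toy_neighbor(order):
    i, j = random.sample(range(len(order)), 2)
    new = list(order); new[i], new[j] = new[j], new[i]
    return new

print(simulated_annealing([0, 1, 2, 3], toy_neighbor, toy_power, toy_delay, delay_limit=10))
```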

Relevance:

90.00%

Publisher:

Abstract:

eScience is an umbrella concept which covers internet technologies, such as web service orchestration that involves manipulation and processing of high volumes of data, using simple and efficient methodologies. This concept is normally associated with bioinformatics, but nothing prevents the use of an identical approach for geoinformatics and OGC (Open Geospatial Consortium) web services like WPS (Web Processing Service). In this paper we present an extended WPS implementation based on the PyWPS framework using an automatically generated WSDL (Web Service Description Language) XML document that replicates the WPS input/output document structure used during an Execute request to a server. Services are accessed using a modified SOAP (Simple Object Access Protocol) interface provided by PyWPS, which uses service and input/output identifiers as element names. The WSDL XML document is dynamically generated by applying XSLT (Extensible Stylesheet Language Transformations) to the getCapabilities XML document that is generated by PyWPS. The availability of the SOAP interface and WSDL description allows WPS instances to be accessible to workflow development software like Taverna, enabling users to build complex workflows using web services represented by interconnected graphical blocks. Taverna will transform the visual representation of the workflow into a SCUFL (Simple Conceptual Unified Flow Language)-based XML document that can be run internally or sent to a Taverna orchestration server. SCUFL uses a dataflow-centric orchestration model, as opposed to the more commonly used orchestration language BPEL (Business Process Execution Language), which is process-centric.
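
The snippet below is not the project's actual stylesheet; it is a minimal illustration (using lxml) of the kind of XSLT transformation described, turning a WPS getCapabilities document into a WSDL document. The file names and stylesheet are hypothetical.

```python
from lxml import etree

# Hypothetical files: the stylesheet encodes the getCapabilities -> WSDL mapping.
stylesheet = etree.XSLT(etree.parse("wps_to_wsdl.xsl"))
capabilities = etree.parse("getCapabilities.xml")

wsdl = stylesheet(capabilities)                     # apply the transformation
with open("service.wsdl", "wb") as f:
    f.write(etree.tostring(wsdl, pretty_print=True,
                           xml_declaration=True, encoding="UTF-8"))
```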

Relevance:

90.00%

Publisher:

Abstract:

This paper deals with Takagi-Sugeno (TS) fuzzy model identification of nonlinear systems using fuzzy clustering. In particular, an extended fuzzy Gustafson-Kessel (EGK) clustering algorithm, using robust competitive agglomeration (RCA), is developed for automatically constructing a TS fuzzy model from system input-output data. The EGK algorithm can automatically determine the 'optimal' number of clusters from the training data set. It is shown that the EGK approach is relatively insensitive to initialization and is less susceptible to local minima, a benefit derived from its agglomerative property. This issue is often overlooked in the current literature on nonlinear identification using conventional fuzzy clustering. Furthermore, the robust statistical concepts underlying the EGK algorithm help to alleviate the difficulty of cluster identification in the construction of a TS fuzzy model from noisy training data. A new hybrid identification strategy is then formulated, which combines the EGK algorithm with a locally weighted least-squares method for the estimation of local sub-model parameters. The efficacy of this new approach is demonstrated through function approximation examples and also by application to the identification of an automatic voltage regulation (AVR) loop for a simulated 3 kVA laboratory micro-machine system.
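
Not the paper's EGK algorithm, but a small sketch of the locally weighted least-squares step used to estimate the consequent (local affine sub-model) parameters of a TS fuzzy model once cluster membership weights are available. The data and weights are illustrative.

```python
import numpy as np

def local_model_params(X, y, weights):
    """Weighted least-squares estimate of one local affine sub-model.

    X       : (n_samples, n_inputs) regression matrix of input-output data
    y       : (n_samples,) measured outputs
    weights : (n_samples,) membership degrees of the samples in this cluster
    Returns the parameters (a, b) of the local rule y ~ a.x + b.
    """
    Xe = np.column_stack([X, np.ones(len(X))])      # affine term
    W = np.diag(weights)
    theta = np.linalg.solve(Xe.T @ W @ Xe, Xe.T @ W @ y)
    return theta[:-1], theta[-1]

# Illustrative use with made-up data and memberships:
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.3 + 0.01 * rng.standard_normal(100)
a, b = local_model_params(X, y, weights=rng.uniform(0.5, 1.0, 100))
print(a, b)
```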

Relevance:

90.00%

Publisher:

Abstract:

Processor architectures have taken a turn towards many-core processors, which integrate multiple processing cores on a single chip to increase overall performance, and there are no signs that this trend will stop in the near future. Many-core processors are harder to program than multi-core and single-core processors due to the need to write parallel or concurrent programs with high degrees of parallelism. Moreover, many-cores have to operate in a mode of strong scaling because of memory bandwidth constraints. In strong scaling, increasingly fine-grained parallelism must be extracted in order to keep all processing cores busy.

Task dataflow programming models have a high potential to simplify parallel programming because they relieve the programmer of identifying precisely all inter-task dependences when writing programs. Instead, the task dataflow runtime system detects and enforces inter-task dependences during execution based on the description of the memory each task accesses. The runtime constructs a task dataflow graph that captures all tasks and their dependences. Tasks are scheduled to execute in parallel, taking into account the dependences specified in the task graph.

Several papers report important overheads for task dataflow systems, which severely limit the scalability and usability of such systems. In this paper we study efficient schemes to manage task graphs and analyze their scalability. We assume a programming model that supports input, output and in/out annotations on task arguments, as well as commutative in/out and reductions. We analyze the structure of task graphs and identify versions and generations as key concepts for efficient management of task graphs. Then, we present three schemes to manage task graphs building on graph representations, hypergraphs, and lists. We also consider a fourth edge-less scheme that synchronizes tasks using integers. Analysis using micro-benchmarks shows that the graph representation is not always scalable and that the edge-less scheme introduces the least overhead in nearly all situations.
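
The following is not one of the paper's schemes; it is a toy sketch of how a task dataflow runtime can derive inter-task dependences from in/out argument annotations by tracking the last writer and the readers of each object (a simple versioning idea). All names are illustrative.

```python
from collections import defaultdict

class TaskGraph:
    """Build task dependence edges from in/out annotations on task arguments."""

    def __init__(self):
        self.edges = set()                      # (predecessor, successor) pairs
        self.last_writer = {}                   # object -> task that last wrote it
        self.readers = defaultdict(list)        # object -> tasks reading the current version

    def add_task(self, task, ins=(), outs=()):
        for obj in ins:                         # read-after-write dependence
            if obj in self.last_writer:
                self.edges.add((self.last_writer[obj], task))
            self.readers[obj].append(task)
        for obj in outs:                        # write-after-read / write-after-write
            for r in self.readers[obj]:
                if r is not task:
                    self.edges.add((r, task))
            if obj in self.last_writer and self.last_writer[obj] is not task:
                self.edges.add((self.last_writer[obj], task))
            self.last_writer[obj] = task        # a new version of obj starts here
            self.readers[obj] = []

g = TaskGraph()
g.add_task("t1", outs=["a"])
g.add_task("t2", ins=["a"], outs=["b"])
g.add_task("t3", ins=["a"])
g.add_task("t4", outs=["a"])                    # must wait for readers t2 and t3
print(sorted(g.edges))
```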

Relevance:

90.00%

Publisher:

Abstract:

Since the first launch of the New Engineering Contract (NEC) in 1993, early warning of problems has been widely recognized as an important approach to proactive management during a construction or engineering project. Is early warning really effective for improving problem solving and project performance? This is a research question that still lacks a good answer. For this reason, an empirical investigation was made in the United Kingdom (U.K.) to answer the question. This study adopts a combination of literature review, expert interviews, and a questionnaire survey. Nearly 100 questionnaire responses were collected from the U.K. construction industry, based on which the use of early warning under different forms of contract is compared in this paper. Problem solving and project performance are further compared between projects using early warning and projects not using early warning. The comparison provides clear evidence for the significant effect of early warning on problem solving and project performance in terms of time, cost, and quality. Subsequently, an input-process-output model is developed in this paper to explore the relationship among early warning, problem solving, and project performance. All of these help construction researchers and practitioners to better understand the role of early warning in ensuring project success.

Relevance:

90.00%

Publisher:

Abstract:

Signal integrity in high-speed interconnected digital systems, when assessed through the simulation of physical (transistor-level) models, is computationally expensive (e.g., in CPU run time and memory storage) and requires the disclosure of physical details of the device's internal structure. This scenario increases the interest in the alternative of behavioral modeling, which describes the operating characteristics of a device from the observation of its input/output (I/O) electrical signals. The I/O interfaces of memory chips, which contribute the most to the computational load, perform complex functions and therefore include a large number of pins. In particular, output buffers necessarily distort the signals owing to their dynamics and nonlinearity; they therefore constitute the critical point in integrated circuits (ICs) for guaranteeing reliable transmission in high-speed digital communications. In this doctoral work, previously neglected nonlinear dynamic effects of the output buffer are studied and modeled efficiently so as to reduce the complexity of parametric black-box modeling, thereby improving the standard IBIS model. This is achieved by following a semi-physical approach that combines the formulation features of black-box modeling, the analysis of the electrical signals observed at the I/O, and properties of the physical structure of the buffer under practical operating conditions. This approach leads to a physically inspired behavioral model construction process that overcomes the problems of previous approaches, optimizing the resources used in the different stages of model generation (namely characterization, formulation, extraction, and implementation) to simulate the nonlinear dynamic behavior of the buffer. Consequently, the most significant contribution of this thesis is the development of a new two-port analog behavioral model suitable for simulation under overclocking, which is of particular interest for the most recent uses of high-data-rate memory I/O interfaces. The effectiveness and accuracy of the behavioral models developed and implemented are qualitatively and quantitatively evaluated by comparing the numerical results of the extraction of their functions and of transient simulation with the corresponding state-of-the-art reference model, IBIS.
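
The sketch below is not the thesis's model; it is a toy two-piece behavioral representation of an output buffer of the kind used in IBIS-like black-box models, where the output current is a weighted combination of pull-up and pull-down static characteristics with time-varying switching weights. The curves and weights are made-up placeholders.

```python
import numpy as np

def buffer_output_current(v_out, t, f_pullup, f_pulldown, w_up, w_down):
    """Two-piece behavioral buffer: i(t) = w_up(t)*f_up(v) + w_down(t)*f_down(v).

    f_pullup, f_pulldown : static I-V characteristics of the two output stages
    w_up, w_down         : time-varying weights describing the switching event
    """
    return w_up(t) * f_pullup(v_out) + w_down(t) * f_pulldown(v_out)

# Illustrative (made-up) static curves and switching weights for a falling edge.
vdd = 1.8
f_up = lambda v: 0.02 * (vdd - v)          # linearized pull-up I-V
f_dn = lambda v: -0.02 * v                 # linearized pull-down I-V
w_up = lambda t: np.clip(1 - t / 1e-9, 0, 1)
w_dn = lambda t: 1 - w_up(t)

t = np.linspace(0, 2e-9, 5)
print(buffer_output_current(0.9, t, f_up, f_dn, w_up, w_dn))
```

In practice the static curves are extracted from I-V measurements or simulations and the weights from transient switching waveforms; the point of the sketch is only the two-port, weighted-static-characteristic structure that such behavioral models share.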