686 results for Processors
Abstract:
The paper describes a methodology for the quantitative assessment of post-harvest losses in the Kainji Lake fishery (Nigeria). The sample population was made up of 314 fisherfolk, 115 processors, 125 fish buyers and 111 fish sellers. For the determination of handling losses, 24,839 fish weighing 2,389.31 kg and belonging to 43 species were examined, of which 10% by number and 9% by weight had deteriorated at checking, and 4% by number and 3% by weight at landing. Processing losses amounted to 22% by number and 16% by weight deteriorated prior to and during smoking with the traditional 'Banda' kiln. During marketing, 16% of fish sold had deteriorated, and 6% by weight of fish bought also deteriorated, mainly due to insect infestation during storage. Based on the 1995 yield estimate for the Kainji Lake fishery, approximately 1,000 tons of fish valued at 80 million Naira were lost during handling alone. This figure would be much higher if the losses during processing and marketing were included. This assessment technique is recommended for obtaining quantifiable data on post-harvest losses from other water bodies in Nigeria.
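The loss-rate arithmetic behind these percentages is simple to reproduce. In the sketch below, the spoiled-fish tallies are hypothetical back-calculations from the quoted percentages, since the raw counts are not given in the abstract.

```python
# Minimal sketch of the loss-rate arithmetic described above. The spoiled
# tallies are hypothetical back-calculations from the quoted percentages;
# the paper's raw counts are not given in the abstract.
def loss_rates(n_examined, wt_examined_kg, n_spoiled, wt_spoiled_kg):
    """Return percentage losses by number and by weight."""
    return 100 * n_spoiled / n_examined, 100 * wt_spoiled_kg / wt_examined_kg

by_number, by_weight = loss_rates(24_839, 2_389.31, 2_484, 215.0)
print(f"Losses at checking: {by_number:.0f}% by number, {by_weight:.0f}% by weight")
```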
Abstract:
Singular Value Decomposition (SVD) is a key linear-algebra operation in many scientific and engineering applications. In particular, many computational intelligence systems rely on machine learning methods involving high-dimensionality datasets that must be processed quickly for real-time adaptability. In this paper we describe a practical FPGA (Field Programmable Gate Array) implementation of an SVD processor for accelerating the solution of large LSE problems. The design approach has been comprehensive, from algorithmic refinement through numerical analysis to customization for an efficient hardware realization. The processing scheme rests on an adaptive vector-rotation evaluator for error regularization that enhances convergence speed with no penalty on solution accuracy. The proposed architecture, which follows a data-transfer scheme, is scalable and based on the interconnection of simple rotation units, which allows a trade-off between occupied area and processing acceleration in the final implementation. This permits the SVD processor to be implemented on both low-cost and high-end FPGAs, according to the final application requirements.
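Rotation-based SVD processors of this kind typically descend from the Jacobi family of algorithms. The sketch below is a generic one-sided Jacobi SVD in software, shown only to illustrate the rotation-unit idea; it does not reproduce the paper's adaptive rotation evaluator.

```python
# One-sided Jacobi SVD: orthogonalize column pairs with plane rotations,
# the classic scheme behind systolic/FPGA SVD arrays. Illustrative only.
import numpy as np

def jacobi_svd(A, sweeps=30, eps=1e-12):
    U = A.astype(float).copy()
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(sweeps):
        off = 0.0
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = U[:, p] @ U[:, p]
                beta = U[:, q] @ U[:, q]
                gamma = U[:, p] @ U[:, q]
                off = max(off, abs(gamma))
                if abs(gamma) < eps:
                    continue
                # rotation angle that zeroes the (p, q) column inner product
                zeta = (beta - alpha) / (2 * gamma)
                t = 1.0 if zeta == 0 else np.sign(zeta) / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                R = np.array([[c, s], [-s, c]])
                U[:, [p, q]] = U[:, [p, q]] @ R
                V[:, [p, q]] = V[:, [p, q]] @ R
        if off < eps:
            break
    sigma = np.linalg.norm(U, axis=0)      # singular values = column norms
    return U / sigma, sigma, V

A = np.random.randn(6, 4)
U, s, V = jacobi_svd(A)
print(np.allclose((U * s) @ V.T, A))       # True once the sweeps converge
```

Because each rotation touches only one column pair, independent pairs can be processed in parallel, which is exactly the area-versus-speed trade-off the interconnected rotation units exploit.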
Abstract:
An assessment is given of the post-harvest losses in the Lake Kainji fisheries of Nigeria. The study focused on quantifiable information on post-harvest technology and post-harvest losses from fisherfolk, fish processors and fish traders operating within the Kainji Lake basin. The information was obtained from questionnaires sent to a total of 665 respondents, comprising 317 fishermen, 115 fish processors, 125 fish buyers, and 111 fish sellers in 45 fishing villages and collection centres within the lake basin. Of the total catch from gillnets, longlines, traps and cast nets, estimated at 14,000 t in 1995, about 1,000 t of fish was either discarded or lost value due to spoilage during handling by fisherfolk. Assuming an average price of 80 Naira/kg of fish, the loss to the economy amounted to 80 million Naira annually. Appropriate recommendations are made to significantly reduce post-harvest losses in the Kainji Lake fishery. (PDF contains 91 pages)
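The 80 million Naira figure follows directly from the quantities quoted in the abstract:

```latex
% Valuation implied by the abstract's own figures:
\[
1{,}000\ \text{t} \;\times\; 1{,}000\ \tfrac{\text{kg}}{\text{t}} \;\times\; 80\ \tfrac{\text{Naira}}{\text{kg}}
\;=\; 80{,}000{,}000\ \text{Naira} \;\approx\; 80\ \text{million Naira per year}.
\]
```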
Abstract:
With the size of transistors approaching the sub-nanometer scale and Si-based photonics pinned at the micrometer scale by the diffraction limit of light, we are unable to easily integrate the high transfer speeds of this comparatively bulky technology with the increasingly small architecture of state-of-the-art processors. However, we can bridge the gap between these two technologies by directly coupling electrons to photons through the use of dispersive metals in optics. Doing so gives us access to the surface electromagnetic wave excitations that arise at a metal/dielectric interface, a feature which both confines and enhances light in subwavelength dimensions, two promising characteristics for the development of integrated chip technology. This platform is known as plasmonics, and it allows us to design a broad range of complex metal/dielectric systems, all having different nanophotonic responses, but all originating from our ability to engineer the system's surface plasmon resonances and interactions. In this thesis, we demonstrate how plasmonics can be used to develop coupled metal-dielectric systems that function as tunable plasmonic hole-array color filters for CMOS image sensing, visible metamaterials composed of coupled negative-index plasmonic coaxial waveguides, and programmable plasmonic waveguide networks serving as color routers and logic devices at telecommunication wavelengths.
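The subwavelength confinement mentioned here comes from the bound-mode condition at a metal/dielectric interface. The standard surface plasmon polariton (SPP) dispersion relation, stated below for reference, is textbook material rather than a result of the thesis:

```latex
% SPP wavevector at a planar interface between a metal with permittivity
% \varepsilon_m(\omega) and a dielectric with permittivity \varepsilon_d:
\[
k_{\mathrm{SPP}} \;=\; \frac{\omega}{c}
\sqrt{\frac{\varepsilon_m \varepsilon_d}{\varepsilon_m + \varepsilon_d}},
\qquad \operatorname{Re}(\varepsilon_m) < -\varepsilon_d,
\]
% so k_SPP exceeds the free-space wavevector and the mode is bound
% to the interface below the diffraction limit.
```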
Abstract:
Several technologies have been disseminated to the rural populace, of which few were adoptable while others were beyond their reach. The adoption-rejection behaviour of the people of the Monai community formed the basis for this study. Twenty women fish processors were interviewed using a standard questionnaire. The study found that the women fish processors used the newly introduced Better Life Programme Banda in rotation for even coverage. The use of the new Banda had a positive effect on the quantity of fish processed and on income: an additional two basins were processed weekly, with an income of N1,000.00 above that of the previous processing technology. No woman processor, however, owned the new Banda individually, because of its cost. Hence rural people adopt a technology that is cheap and simple and that conforms to their traditional practices and values. Any innovation intended for the rural populace should therefore take cognizance of their socio-economic and cultural factors.
Abstract:
Fisheries are important to Nigeria's agricultural economy because they provide employment for fisherfolk (men and women fishers), fishmongers (fish traders), fish processors and fish farmers. They also supply protein to the diet of Nigerians and are a viable source of foreign exchange earnings for the government. The estimated Nigerian population of 120 million consumes about 1.2 million metric tonnes of fish and fish products annually. This underlines the important role fisheries could play in the Nigerian diet, considering that Nigeria has vast inland waters covering an estimated total surface area of 199,580 km² and an equally vast sea area of 25,000 km². In these waters, the author notes, there are diverse fish resources of economic importance, both inland and at sea. FDF (2000) estimated the current annual yield of inland and sea waters combined at about 418,069.3 metric tonnes from artisanal fisheries and 23,720 metric tonnes from aquaculture. The shortfall between the annual consumption level of 1.2 million metric tonnes and this yield is made up through importation. It is therefore of concern that, given the current fish yield from the various fisheries resources, demand still exceeds supply. One wonders whether the production inadequacy is due to poor management of the available fisheries resources or to improved fisheries technology that could aid increased production not being efficiently transferred to fish farmers. To answer these questions one needs to examine past and present extension policy in Nigeria as it affects the dissemination of fisheries technologies.
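The scale of the import gap follows from the abstract's own figures:

```latex
% Supply gap implied by the figures quoted above (FDF, 2000):
\[
\underbrace{1{,}200{,}000}_{\text{consumption (t)}}
\;-\;\big(\underbrace{418{,}069.3}_{\text{artisanal (t)}}
\;+\;\underbrace{23{,}720}_{\text{aquaculture (t)}}\big)
\;\approx\; 758{,}211\ \text{t met by imports each year}.
\]
```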
Abstract:
The fishery of Lake Victoria became a major commercial fishery with the introduction of Nile perch in the 1950s and 1960s. Biological and population characteristics point to a fishery under intense fishing pressure, attributed to increased capacity and the use of illegal fishing gears. Studies conducted between 1998 and 2000 suggested capturing fish within a slot size of 50 to 85 cm TL to sustain the fishery. Samples from Kenyan and Ugandan factories in 2008 showed that 50% and 71% of the individuals processed, respectively, were below the slot size. This study revealed that fish below and above the slot have continued to be caught and processed, confirming that the slot size is hardly adhered to by either the fishers or the processors. The paper explores why the slot size has not been a successful tool in the management of Nile perch and suggests strategies to sustain the fishery.
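Compliance figures like the 50% and 71% reported here reduce to counting lengths against the slot limits. A minimal sketch follows, with length samples invented purely for illustration:

```python
# Slot-size compliance count for Nile perch; the length samples are
# invented for illustration, not data from the study.
SLOT_MIN_CM, SLOT_MAX_CM = 50, 85          # recommended slot, total length (TL)

def slot_breakdown(lengths_cm):
    n = len(lengths_cm)
    below = sum(l < SLOT_MIN_CM for l in lengths_cm)
    above = sum(l > SLOT_MAX_CM for l in lengths_cm)
    return below / n, (n - below - above) / n, above / n

sample = [34, 47, 52, 61, 66, 72, 78, 88, 91, 45]
below, within, above = slot_breakdown(sample)
print(f"{below:.0%} below slot, {within:.0%} within, {above:.0%} above")
```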
Abstract:
A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, many of them are feasible in one network. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The approaches developed so far, such as outer products, learning algorithms, or energy functions, suffer from the following deficiencies: long training/specification times; no guarantee of working on all inputs; and a requirement for full connectivity.
Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model, since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look at three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now viable algorithms lead us to new perspectives on switch design.
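The Winner-Take-All behaviour these designs build on can be illustrated with a simple mutual-inhibition iteration. The discrete-time update below is a generic software sketch, not the thesis's Hopfield-circuit formulation:

```python
# Generic discrete-time Winner-Take-All via mutual inhibition: each unit
# excites itself and inhibits all others until a single unit stays active.
# Illustrative dynamics only, not the Hopfield electronic model.
import numpy as np

def winner_take_all(inputs, self_excite=1.2, inhibit=0.3, steps=50):
    x = np.asarray(inputs, dtype=float)
    for _ in range(steps):
        x = np.maximum(0.0, self_excite * x - inhibit * (x.sum() - x))
        if np.count_nonzero(x) <= 1:
            break                          # a single winner remains
    return x

print(winner_take_all([0.30, 0.90, 0.85, 0.10]))   # only the largest input survives
```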
Abstract:
With continuing advances in CMOS technology, feature sizes of modern silicon chip-sets have shrunk drastically over the past decade. In addition to desktop and laptop processors, a vast majority of these chips are also deployed in mobile communication devices like smart-phones and tablets, where multiple radio-frequency integrated circuits (RFICs) must be integrated into one device to cater to a wide variety of applications such as Wi-Fi, Bluetooth, NFC, wireless charging, etc. While a small feature size enables higher integration levels, allowing billions of transistors to co-exist on a single chip, it also makes these silicon ICs more susceptible to variations. Part of these variations can be attributed to the manufacturing process itself, particularly the stringent dimensional tolerances associated with the lithographic steps in modern processes. Additionally, RF and millimeter-wave communication chip-sets are subject to another type of variation caused by dynamic changes in the operating environment. Another bottleneck in the development of high-performance RF/mm-wave silicon ICs is the lack of accurate analog/high-frequency models in nanometer CMOS processes. This can be attributed primarily to the fact that most cutting-edge processes are geared towards digital system implementation, and as such there is little model-to-hardware correlation at RF frequencies.
All these issues have significantly degraded the yield of high-performance mm-wave and RF CMOS systems, which often require multiple trial-and-error silicon validations, thereby incurring additional production costs. This dissertation proposes a low-overhead technique which counters the detrimental effects of these variations, thereby improving both the performance and the yield of chips post-fabrication in a systematic way. The key idea behind this approach is to dynamically sense the performance of the system, identify when a problem has occurred, and then actuate the system back to its desired performance level through an intelligent on-chip optimization algorithm. We term this technique self-healing, drawing inspiration from nature's own way of healing the body against adverse environmental effects. To demonstrate the efficacy of self-healing in CMOS systems, several representative examples are designed, fabricated, and measured across a variety of operating conditions.
We demonstrate a high-power mm-wave segmented power-mixer-array-based transmitter architecture that is capable of generating high-speed, non-constant-envelope modulations at higher efficiencies than existing conventional designs. We then incorporate several sensors and actuators into the design and demonstrate closed-loop healing against a wide variety of non-ideal operating conditions. We also demonstrate fully integrated self-healing in the context of another mm-wave power amplifier, where measurements across several chips show significant improvements in performance as well as reduced variability in the presence of process variations, load impedance mismatch, and catastrophic transistor failure. Finally, on the receiver side, a closed-loop self-healing phase-synthesis scheme is demonstrated in conjunction with a wide-band voltage-controlled oscillator to generate phase-shifted local oscillator (LO) signals for a phased-array receiver. The system is shown to heal against non-idealities in LO signal generation and distribution, significantly reducing phase errors across a wide range of frequencies.
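A sense/identify/actuate loop of this kind can be sketched as a gradient-free search over actuator settings. Everything below (the toy sensor response, the bias-code knob) is a hypothetical stand-in for the on-chip blocks the dissertation describes:

```python
# Hedged sketch of a self-healing loop: measure a performance metric, then
# hill-climb over discrete actuator settings. `read_sensor` and the bias-code
# actuator are hypothetical stand-ins for the dissertation's on-chip blocks.
import random

def read_sensor(bias_code, drift):
    # Toy response: performance peaks at an unknown, drift-shifted code.
    return -((bias_code - (16 + drift)) ** 2)

def self_heal(codes=range(32), iters=100, drift=5):
    best = random.choice(list(codes))
    for _ in range(iters):
        step = random.choice((-1, 1))
        candidate = min(max(best + step, codes[0]), codes[-1])
        if read_sensor(candidate, drift) > read_sensor(best, drift):
            best = candidate   # actuate only when measured performance improves
    return best

print(self_heal())             # settles at code 21 with high probability
```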
Abstract:
Co-management is typically understood as a resource management system that shares managerial responsibility between the state and the other stakeholders of a resource. In the case of Lake Victoria, one would expect the state to be represented by the fisheries departments of Kenya, Uganda and Tanzania, while stakeholder groups may comprise fishing communities, fish processing factories and municipalities. Taking that into account, the survey's objectives were defined as: (a) to identify the difficulties and impracticalities inherent in implementing state-based regulations via a "top-down" management strategy; (b) to assess the prevalence of community-based institutions that either seek to regulate the fishery or have the potential to be used to regulate it; (c) to identify ways in which community-based regulatory and monitoring systems may be established, and how these will fare over time; (d) to identify roles for national Fisheries Departments, industrial fish processors and other stakeholders; and (e) to develop well-founded policy suggestions for the establishment of a co-management framework to manage the fisheries of Lake Victoria.
Abstract:
The findings are presented of a marketing survey conducted in the Lake Victoria region. The research concentrated on consumers, trader/processors serving local markets, industrial processors serving mainly international markets, and fishers. The market for fish from Lake Victoria is traced from the consumer to the producer, including as many components of the chain as possible. The components are dealt with in individual sections, each comprising a profile of a typical consumer/trader-processor/industrial processor/fisher, a list of survey sites, a map showing locations, a note on potential biases within the individual survey, a list of hypotheses or study topics (for all surveys except that of industrial processors), detailed analyses, and the pertinent questionnaire.
Abstract:
Determination of the energy range is an important precondition of the focus calibration using alignment (FOCAL) test. A new method to determine the energy range of FOCAL offline is presented in this paper. Being independent of the lithographic tool, the method is time-saving and effective. The influences of several process factors, e.g. resist thickness, post-exposure bake (PEB) temperature, PEB time and development time, on the energy range of FOCAL are analyzed.
Abstract:
150 p.
Abstract:
142 p.
Abstract:
The growing demand for computational power has stimulated the research and development of digital processors ever denser in transistors and with faster clocks, without disregarding limiting factors such as power consumption, heat dissipation, manufacturing complexity and commercial cost. A different line of information processing is quantum computing, whose elementary storage unit is the quantum version of the bit, the qubit or quantum bit, which holds a superposition of two states, unlike the classical bit, which registers only one of the states. Quantum simulators, executable on conventional computers, make it possible to run quantum algorithms but, being software products, are subject to reduced performance due to the computational model and to memory limitations. This dissertation presents a hardware-implementable version of a coprocessor for simulating quantum operations, using an application-specific architecture with the possibility of exploiting parallelism through component replication and pipelining. The architecture includes a quantum state memory, in which the individual and group states of the qubits are stored; a scratch memory, in which quantum operators for two or more qubits, built at run time, are stored; a calculation unit, responsible for executing the products of complex numbers that underlie the tensor and matrix products needed to perform quantum operations; a measurement unit, needed to determine the quantum state of the machine; and a control unit, which ensures the correct operation of the datapath components using a microprogram and a few other auxiliary components.
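The calculation unit's core task, complex-number products feeding tensor and matrix products, can be sketched in a few lines. The flat amplitude-vector layout below is a common simulator convention assumed here for illustration, not necessarily the dissertation's quantum-state memory organization:

```python
# Core operation of a quantum-simulation datapath: apply a 2x2 single-qubit
# gate to a state vector of 2**n complex amplitudes. The flat-vector layout
# is a common simulator convention, assumed here for illustration.
import numpy as np

def apply_gate(state, gate, target, n_qubits):
    """Multiply `gate` into amplitude pairs whose indices differ in bit `target`."""
    state = state.copy()
    stride = 1 << target
    for base in range(0, 1 << n_qubits, stride << 1):
        for k in range(base, base + stride):
            a, b = state[k], state[k + stride]
            state[k] = gate[0, 0] * a + gate[0, 1] * b
            state[k + stride] = gate[1, 0] * a + gate[1, 1] * b
    return state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
state = np.zeros(4, dtype=complex)
state[0] = 1.0                                   # |00>
state = apply_gate(state, H, target=0, n_qubits=2)
print(state)                                     # (|00> + |01>)/sqrt(2)
```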