Abstract:
[EN] The higher education regulation process in Europe, known as the Bologna Process, has involved many changes, mainly in relation to methodology and assessment. The paper below concerns the implementation of the new EU study plans at the Teacher Training College of Vitoria-Gasteiz; it is the first interdisciplinary paper written by the teaching staff involved and relates to the Teaching Profession module, the first module in the structure of the new plans. The coordination of teaching staff is one of the main lines of work in the Bologna Process and is essential for developing the right skills and maximising the role of students as active participants in their own learning. The use of active, interdisciplinary methodologies has opened up a new dimension in universities, requiring the elimination of the formerly compartmentalised, individual structure and prompting a search for new areas of exchange in which students' training can be developed jointly.
Abstract:
Over 100 molluscan species are landed in Mexico. About 30% are harvested on the Pacific coast and 70% on the Atlantic coast. Clams, scallops, and squid predominate on the Pacific coast (abalone, limpets, and mussels are landed there exclusively). Conchs and oysters predominate on the Atlantic coast. In 1988, some 95,000 metric tons (t) of mollusks were landed, with a value of $33 million. Mollusks were used extensively in prehispanic Mexico as food, tools, and jewelry. Their use as food and jewelry continues. Except in the States of Baja California and Baja California Sur, where abalone, clams, and scallops provide fishermen with year-round employment, mollusk fishing is done part time. On both the Pacific and Atlantic coasts, many fishermen are nomads, harvesting mollusks wherever they find abundant stocks. Upon finding such beds, they build camps, begin harvesting, and continue until the mollusks become so scarce that it no longer pays to continue. They then look for productive beds in other areas and rebuild their camps. Fishermen harvest abalones, mussels, scallops, and clams by free-diving and by using scuba and hookah. Landings of clams and cockles have been growing, and 22,000 t were landed in 1988. Fishermen harvest intertidal clams by hand at wading depths, finding them with their feet. In waters up to 5 m deep, they harvest them by free-diving. In deeper water, they use scuba and hookah. Many species of gastropods have commercial importance on both coasts. All species with a large detachable muscle are sold as scallops. On the Pacific coast, hatchery culture of oysters prevails. Oyster culture in Atlantic coast lagoons began in the 1950s, when beds were enhanced by spreading shells as cultch for spat. (PDF file contains 228 pages.)
Abstract:
This survey was carried out to provide the Kainji Lake Fisheries Promotion Project (KLFPP), whose overall goal is to improve the standard of living of fishing communities around Kainji Lake, Nigeria, by managing the fisheries on a sustainable basis, with follow-up data for long-term monitoring and evaluation of the overall project goal. A similar survey, conducted in 1996, provided the baseline against which data from the current survey were evaluated. In a cross-sectional survey, anthropometric data were collected from 576 children aged 3-60 months in 282 fisherfolk households around the southern sector of Kainji Lake, Nigeria. In addition, data were collected on the nutritional status and fertility of the mothers, vaccination coverage of children, and child survival indicators. For control purposes, 374 children and 181 mothers from non-fishing households around Kainji Lake were likewise covered by the survey. A standardised questionnaire was used to collect relevant data, while anthropometric measurements were made using appropriate equipment. Data compilation and analysis were carried out with a specially designed Microsoft Access application, using NCHS reference data for the analysis of anthropometric measurements. Statistical significance testing was done using EPI-INFO software. The results of the follow-up survey indicate a slight increase in the percentage of stunted pre-school children in fishing households around Kainji Lake, from 40% in 1996 to 41% in 1999. This increase is, however, not statistically significant (p = 0.704). Over the same period, the percentage of stunted children in non-fishing households increased from 37% to 39% (p = 0.540), which is also not statistically significant. Likewise, there were no statistically significant differences between the 1996 and 1999 results for the prevalence of either wasted or underweight children in fishing households. The same applies to children from non-fishing households. In addition, vaccination coverage remains very low, while infant and child mortality rates continue to be extremely high, with about 1 in 5 children dying before their fifth birthday. There has been no perceptible and lasting improvement in the standard of living of fishing households over the course of the second project phase, as indicated by the persistently high prevalence of stunting. The situation is the same for the control group, indicating that, for the region as a whole, a number of factors beyond the immediate influence of the project continue to have a negative impact on the standard of living. The results also show that the project activities have not had any negative long-term effect on the nutritional status of the beneficiaries. (PDF contains 44 pages)
Abstract:
Technology scaling has enabled drastic growth in the computational and storage capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high-bandwidth communication between and within ICs. In this dissertation we focus on low-power solutions that address this demand. We divide communication links into three subcategories depending on the communication distance. Each category has a different set of challenges and requirements and is affected by CMOS technology scaling in a different manner. We start with short-range chip-to-chip links for board-level communication. Next, we discuss board-to-board links, which demand a longer communication range. Finally, we discuss on-chip links with communication ranges of a few millimeters.
Electrical signaling is a natural choice for chip-to-chip communication due to efficient integration and low cost. I/O data rates have increased to the point where electrical signaling is now limited by the channel bandwidth. In order to achieve multi-Gb/s data rates, complex designs that equalize the channel are necessary. In addition, a high level of parallelism is central to sustaining bandwidth growth. Decision feedback equalization (DFE) is one of the most commonly employed techniques to overcome the limited bandwidth of electrical channels. A linear, low-power summer is the central block of a DFE. Conventional approaches employ current-mode techniques to implement the summer, which results in high power consumption. In order to achieve low-power operation we propose performing the summation in the charge domain. This approach enables a low-power and compact realization of the DFE as well as crosstalk cancellation. A prototype receiver was fabricated in 45nm SOI CMOS to validate the functionality of the proposed technique and was tested over channels with different levels of loss and coupling. Measurement results show that the receiver can equalize channels with up to 21dB of loss while consuming about 7.5mW from a 1.2V supply. We also introduce a compact, low-power transmitter employing passive equalization. The efficacy of the proposed technique is demonstrated through the implementation of a prototype in 65nm CMOS. The design achieves up to 20Gb/s data rate while consuming less than 10mW.
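As a rough illustration of the DFE principle described above (a minimal behavioral sketch, not the charge-domain summer circuit; the channel model, tap value, and noise level are hypothetical), the following Python snippet shows a one-tap decision feedback equalizer: the summer subtracts the estimated inter-symbol interference of the previous decision before the slicer decides the current bit.

```python
import numpy as np

# Hypothetical discrete-time channel: a main cursor plus one post-cursor
# of inter-symbol interference (ISI).
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 1000) * 2 - 1           # random +/-1 symbols
h = [1.0, 0.5]                                    # [main cursor, post-cursor]
rx = np.convolve(bits, h)[:len(bits)]             # channel output
rx += 0.1 * rng.standard_normal(len(rx))          # additive noise

# One-tap DFE: the summer subtracts the ISI contribution of the previous
# decision before the slicer makes the current decision.
tap = 0.5                                         # ideally equals h[1]
decisions = np.empty(len(rx))
prev = 0.0
for n in range(len(rx)):
    summed = rx[n] - tap * prev                   # summer output
    prev = 1.0 if summed >= 0 else -1.0           # slicer decision
    decisions[n] = prev

print("bit errors:", int(np.sum(decisions != bits)))
```

Because each decision feeds back into the next summation, the summer's linearity and speed set the whole equalizer's performance, which is why the abstract focuses on realizing it at low power.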
An alternative to electrical signaling is to employ optical signaling for chip-to-chip interconnections, which offers low channel loss and crosstalk while providing high communication bandwidth. In this work we demonstrate the possibility of building compact and low-power optical receivers. A novel RC front-end is proposed that combines dynamic offset modulation and double-sampling techniques to eliminate the need for a short time constant at the input of the receiver. Unlike conventional designs, this receiver does not require a high-gain stage that runs at the data rate, making it suitable for low-power implementations. In addition, it allows time-division multiplexing to support very high data rates. A prototype was implemented in 65nm CMOS and achieved up to 24Gb/s with less than 0.4pJ/b power efficiency per channel. As the proposed design mainly employs digital blocks, it benefits greatly from technology scaling in terms of power and area saving.
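To make the double-sampling idea concrete (a toy model under simplifying assumptions, not the fabricated front-end; waveform values are hypothetical and normalized), the sketch below integrates the photocurrent on the input node and recovers each bit from the difference between consecutive samples, so the node never needs to be reset through a short RC time constant.

```python
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 64)                     # transmitted optical bits
i_one, i_zero = 1.0, 0.2                          # photocurrent per bit (a.u.)

# The input node integrates the photocurrent, so its voltage is a slowly
# rising ramp rather than a full-swing eye.
v = np.cumsum(np.where(bits == 1, i_one, i_zero))
v += 0.02 * rng.standard_normal(len(v))           # sampling noise

# Double sampling: the increment between consecutive samples encodes the
# bit. The fixed threshold midway between the two increments stands in
# for the dynamic offset that centers the decision.
threshold = 0.5 * (i_one + i_zero)
increments = np.diff(np.concatenate(([0.0], v)))
recovered = (increments > threshold).astype(int)

print("bit errors:", int(np.sum(recovered != bits)))
```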
As the technology scales, the number of transistors on the chip grows. This necessitates a corresponding increase in the bandwidth of the on-chip wires. In this dissertation, we take a close look at wire scaling and investigate its effect on wire performance metrics. We explore a novel on-chip communication link based on a double-sampling architecture and dynamic offset modulation technique that enables low power consumption and high data rates while achieving high bandwidth density in 28nm CMOS technology. The functionality of the link is demonstrated using minimum-pitch on-chip wires of different lengths. Measurement results show that the link achieves up to 20Gb/s of data rate (12.5Gb/s/μm) with better than 136fJ/b power efficiency.
Abstract:
Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.
At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.
In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.
In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing-biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction.
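As a toy illustration of why tailoring a code to biased noise helps (a sketch of the general principle under idealized assumptions of independent errors and perfect syndrome extraction, not the codes constructed in the thesis), consider a length-n phase-flip repetition code: it suppresses the dominant dephasing (Z) errors while leaving the rare bit-flip (X) errors unprotected, so the redundancy is spent where the noise actually is.

```python
from math import comb

def logical_error_rates(n, px, pz):
    """Length-n phase-flip repetition code (protects against Z only).

    Logical Z failure: a majority of the n qubits suffer a Z error.
    Logical X failure: any single X error goes uncorrected, roughly
    1 - (1 - px)**n.
    """
    t = n // 2
    p_log_z = sum(comb(n, k) * pz**k * (1 - pz)**(n - k)
                  for k in range(t + 1, n + 1))
    p_log_x = 1 - (1 - px)**n
    return p_log_z, p_log_x

# Dephasing-biased noise: Z errors 100x more likely than X errors.
px, pz = 1e-5, 1e-3
for n in (1, 3, 5, 7):
    lz, lx = logical_error_rates(n, px, pz)
    print(f"n={n}: logical-Z {lz:.2e}, logical-X {lx:.2e}")
```

Past a certain length the unprotected X floor dominates, which is exactly the regime where an asymmetric code that devotes some redundancy to X errors becomes the better trade.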
In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates and a second rate for errors in the distilled states, which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and on how quickly states converge to that limit.
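For concreteness (a standard textbook example, not necessarily one of the protocols analyzed in the chapter), the 15-to-1 distillation protocol of Bravyi and Kitaev suppresses the input error rate cubically, while faulty Clifford operations at a fixed rate p impose a floor that repeated distillation cannot cross:

```latex
% Ideal 15-to-1 distillation: cubic suppression per round
\epsilon_{\mathrm{out}} \approx 35\,\epsilon_{\mathrm{in}}^{3}
% With Clifford operations failing at rate p, iterating the protocol
% converges not to zero but to a floor set by the gate errors:
\epsilon_{\infty} = O(p)
```

This is the interplay the abstract refers to: the fixed gate rate bounds the achievable fidelity, and the cubic map governs how fast the distilled states approach that bound.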
Abstract:
The drive for aquaculture in the Niger Delta has led to the establishment of various forms of hatcheries in the area. Demand for hatcheries is high, as most fish farmers now want to produce their own fingerlings to stock their production ponds for grow-out to market (table) size. The paper shows that there is a large market for fresh fish in the Niger Delta. The majority of the numerous fish farmers are not well equipped to breed and produce fish seed, especially of the species most loved and eaten. The rising cost of materials in the Nigerian economy has become a bottleneck in the construction of more fish hatcheries for fingerling production. The assistance of multinationals has therefore become necessary to make hatchery development feasible and to encourage greater involvement in fish hatchery work. One remarkable area where such assistance is being felt by communities in the Niger Delta is fish farming, particularly the supply of fish fingerlings to fish farmers by The Shell Petroleum Development Company of Nigeria Limited (SPDC), a multinational oil company operating in the area. A few fish farmers have benefited from this. If more hatcheries were available to provide the fingerlings needed to stock the available water bodies, such as home backyard ponds, the 0.74 million hectares of brackish water, the 1.01 million hectares of perennial swamps, and other marginal land available for aquaculture, and if these were properly managed, they would yield between 2.5 and 10 metric tons of fish, depending on the species stocked and bred.
Abstract:
Optimal feedback control of broadband frequency up-conversion in a BBO crystal is experimentally demonstrated by shaping femtosecond laser pulses with a genetic algorithm; the frequency up-conversion efficiency can be enhanced by ~16%. SPIDER measurements show that the optimal laser pulses are shorter, with a slight negative chirp, compared with the original pulses, which have a slight positive chirp. By modulating the fundamental spectral phase with a periodic square distribution on the SLM-256, the frequency up-conversion can be effectively controlled by a factor of about 17%. The experimental results indicate that the broadband frequency up-conversion efficiency is related to both second harmonic generation (SHG) and sum frequency generation (SFG): the former depends on the fundamental pulse intensity, while the latter depends not only on the fundamental pulse intensity but also on the fundamental spectral phase. © 2006 Elsevier B.V. All rights reserved.
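The phase dependence noted above follows from the standard expression for the second-harmonic/sum-frequency spectral field of a broadband pulse (a textbook relation quoted for context, in simplified scalar form with phase matching ignored): all pairs of fundamental components whose frequencies sum to 2ω contribute coherently, so their relative spectral phases set the up-converted amplitude.

```latex
% Spectral amplitude at 2\omega: every pair of fundamental components at
% \omega+\Omega and \omega-\Omega interferes, so the result depends on the
% spectral phase \phi(\omega), not just the intensity.
E^{(2)}(2\omega) \propto \int_{-\infty}^{\infty}
  \left|E(\omega+\Omega)\right|\left|E(\omega-\Omega)\right|
  e^{\,i\left[\phi(\omega+\Omega)+\phi(\omega-\Omega)\right]}\,\mathrm{d}\Omega
```

A flat (or antisymmetric) spectral phase makes the integrand add constructively, which is consistent with the genetic algorithm converging to nearly chirp-free pulses.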
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise, or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
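The following Python sketch illustrates the EC2 idea in a simplified, noiseless form (hypothetical hypotheses, tests, and priors; not the BROAD implementation): hypotheses are grouped into equivalence classes by the theory they instantiate, edges weighted by prior products connect hypotheses in different classes, and the greedy step runs the test with the largest expected weight of edges cut.

```python
import itertools

# Hypothetical setup: each hypothesis predicts an outcome for each of
# three tests; theory_of assigns each hypothesis to its equivalence class.
predictions = {"h1": [0, 1, 1], "h2": [0, 0, 1],   # theory A
               "h3": [1, 1, 0], "h4": [1, 0, 0]}   # theory B
theory_of = {"h1": "A", "h2": "A", "h3": "B", "h4": "B"}
prior = {h: 0.25 for h in predictions}

def edge_weight(live):
    """Total weight of edges between live hypotheses in different classes."""
    return sum(prior[a] * prior[b]
               for a, b in itertools.combinations(live, 2)
               if theory_of[a] != theory_of[b])

def expected_cut(test, live):
    """Expected edge weight cut by `test`, assuming noiseless outcomes."""
    before = edge_weight(live)
    gain = 0.0
    for outcome in (0, 1):
        consistent = [h for h in live if predictions[h][test] == outcome]
        p_outcome = sum(prior[h] for h in consistent)
        if p_outcome > 0:
            gain += p_outcome * (before - edge_weight(consistent))
    return gain

live = list(predictions)
best = max(range(3), key=lambda t: expected_cut(t, live))
print("greedy first test:", best)   # picks the test that separates A from B
```

All edges are cut exactly when a single equivalence class (one theory) remains, and the adaptive submodularity of this objective is what underwrites the greedy guarantees mentioned above.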
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
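For reference, the standard textbook forms of the discount functions being compared are, schematically (the thesis's own parameterizations may differ):

```latex
D(t) = \delta^{t}                                  % exponential
D(t) = \frac{1}{1 + k t}                           % hyperbolic
D(t) = \begin{cases} 1 & t = 0 \\ \beta\,\delta^{t} & t > 0 \end{cases}
                                                   % quasi-hyperbolic (present bias)
D(t) = (1 + \alpha t)^{-\beta/\alpha}              % generalized hyperbolic
```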
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
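One schematic way to see the connection (a sketch of the mechanism, not the thesis's proof): if the agent discounts exponentially in a random subjective time T(t), the effective discount function is a mixture of exponentials, and such mixtures include the hyperbolic forms above.

```latex
% Exponential discounting evaluated in random subjective time T(t):
D(t) = \mathbb{E}\!\left[ e^{-r\,T(t)} \right]
% e.g. if r\,T(t) is Gamma-distributed with fixed shape \beta/\alpha and
% scale \alpha t, then D(t) = (1 + \alpha t)^{-\beta/\alpha}, the
% generalized-hyperbolic form, and choices can reverse over time.
```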
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinctly different from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitutes will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
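A schematic version of such a model (a hedged sketch with hypothetical notation, not the exact specification estimated on the retailer's data) embeds a reference-dependent, loss-averse value of price in a multinomial logit:

```latex
% Gain-loss value of price p_j relative to reference price r_j, with
% loss-aversion coefficient \lambda > 1:
v(p_j; r_j) =
\begin{cases}
  \eta\,(r_j - p_j) & p_j \le r_j \quad \text{(a discount is a gain)} \\
  \lambda\,\eta\,(r_j - p_j) & p_j > r_j \quad \text{(a price rise is a loss)}
\end{cases}
% Logit choice probabilities over items j with attributes x_j:
P(j) = \frac{\exp\!\left(\beta^{\top} x_j + v(p_j; r_j)\right)}
            {\sum_{k} \exp\!\left(\beta^{\top} x_k + v(p_k; r_k)\right)}
```

With λ > 1, the demand drop when a discount is withdrawn exceeds the gain when it was introduced, producing exactly the excess substitution pattern described above.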
In future work, BROAD could be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
[ES] Policies supporting entrepreneurship have proved essential for the economic development of countries. In this context, the well-known business incubators play an important role, but the innovative growth accelerators that are managing to turn small start-ups into large technology-based companies present themselves as a bet on the future. The combination of both concepts constitutes an effective model of support for technology start-ups. This paper presents several studies showing that, while the most valued resources in incubators are cost savings and mentoring, accelerators receive very good ratings across all of their services; even so, ease of obtaining financing and mentoring are also the aspects most valued by participants in acceleration programmes. In addition, two success stories are used to propose a set of good practices intended to help, among other things, to improve the position of the Autonomous Community of the Basque Country (CAPV) with regard to entrepreneurship.
Abstract:
The Advanced LIGO and Virgo experiments are poised to detect gravitational waves (GWs) directly for the first time this decade. The ultimate prize will be joint observation of a compact binary merger in both gravitational and electromagnetic channels. However, GW sky locations that are uncertain by hundreds of square degrees will pose a challenge. I describe a real-time detection pipeline and a rapid Bayesian parameter estimation code that will make it possible to search promptly for optical counterparts in Advanced LIGO. Having analyzed a comprehensive population of simulated GW sources, we describe the sky localization accuracy that the GW detector network will achieve as each detector comes online and progresses toward design sensitivity. Next, in preparation for the optical search with the intermediate Palomar Transient Factory (iPTF), we have developed a unique capability to detect optical afterglows of gamma-ray bursts (GRBs) detected by the Fermi Gamma-ray Burst Monitor (GBM). Its comparably coarse error regions offer a close parallel to the Advanced LIGO problem, but Fermi's unique access to MeV-GeV photons and its near all-sky coverage may allow us to look at optical afterglows in a relatively unexplored part of the GRB parameter space. We present the discovery and broadband follow-up observations (X-ray, UV, optical, millimeter, and radio) of eight GBM-iPTF afterglows. Two of the bursts (GRB 130702A / iPTF13bxl and GRB 140606B / iPTF14bfu) are at low redshift (z = 0.145 and z = 0.384, respectively), are sub-luminous with respect to "standard" cosmological bursts, and have spectroscopically confirmed broad-line type Ic supernovae. These two bursts are possibly consistent with mildly relativistic shocks breaking out from the progenitor envelopes rather than the standard mechanism of internal shocks within an ultra-relativistic jet. On a technical level, the GBM-iPTF effort is a prototype for locating and observing optical counterparts of GW events in Advanced LIGO with the Zwicky Transient Facility.