912 results for Bias-Variance Trade-off


Relevance:

100.00%

Publisher:

Abstract:

Master's dissertation presented to the Instituto Superior de Psicologia Aplicada for the degree of Master in Social and Organizational Psychology.

Relevance:

100.00%

Publisher:

Abstract:

Integrated circuit scaling has enabled a huge growth in processing capability, which necessitates a corresponding increase in inter-chip communication bandwidth. As bandwidth requirements for chip-to-chip interconnection scale, the deficiencies of electrical channels become more apparent. Optical links present a viable alternative due to their low frequency-dependent loss and higher bandwidth density in the form of wavelength-division multiplexing. As integrated photonics and bonding technologies mature, commercialization of hybrid-integrated optical links is becoming a reality. Increasing silicon integration leads to better performance in optical links but necessitates a corresponding co-design strategy spanning both electronics and photonics. In this light, holistic design of high-speed optical links, with an in-depth understanding of photonics and state-of-the-art electronics, can bring their performance to unprecedented levels. This thesis presents developments in high-speed optical links achieved by co-designing and co-integrating the primary elements of an optical link: receiver, transmitter, and clocking.

In the first part of this thesis, a 3D-integrated CMOS/silicon-photonic receiver is presented. The electronic chip features a novel design that employs a low-bandwidth TIA front-end, double-sampling, and equalization through dynamic offset modulation. Measured results show a sensitivity of -14.9dBm and an energy efficiency of 170fJ/b at 25Gb/s. The same receiver front-end is also used to implement a source-synchronous, 4-channel, WDM-based parallel optical receiver. Quadrature ILO-based clocking is employed for synchronization, and a novel frequency-tracking method exploits the dynamics of injection locking in a quadrature ring oscillator to increase the effective locking range. An adaptive body-biasing circuit is designed to keep the per-bit energy consumption constant across a wide range of data rates. The prototype measurements indicate a record-low power consumption of 153fJ/b at 32Gb/s, with a measured receiver sensitivity of -8.8dBm at 32Gb/s.

Next, on the optical transmitter side, three new techniques are presented. The first is a differential ring modulator that breaks the optical bandwidth/quality-factor trade-off known to limit the speed of high-Q ring modulators. This structure maintains a constant energy in the ring to avoid pattern-dependent power droop. As a first proof of concept, a prototype has been fabricated and measured up to 10Gb/s. The second technique is thermal stabilization of micro-ring resonator modulators through direct measurement of temperature using a monolithic PTAT temperature sensor. The measured temperature is used in a feedback loop to adjust the thermal tuner of the ring. A prototype is fabricated, and the closed-loop feedback system is demonstrated to operate at 20Gb/s in the presence of temperature fluctuations. The third technique is a switched-capacitor-based pre-emphasis technique designed to extend the inherently low bandwidth of carrier-injection micro-ring modulators. A measured prototype of the optical transmitter achieves an energy efficiency of 342fJ/bit at 10Gb/s, while the wavelength stabilization circuit based on the monolithic PTAT sensor consumes 0.29mW.

Lastly, a first-order frequency synthesizer suitable for high-speed on-chip clock generation is discussed. The proposed design features an architecture combining an LC quadrature VCO, two sample-and-holds, a phase interpolator (PI), digital coarse-tuning, and rotational frequency detection for fine-tuning. In addition to an electrical reference clock, the prototype chip can also receive a low-jitter optical reference clock generated by a high-repetition-rate mode-locked laser. The output clock at 8GHz has an integrated RMS jitter of 490fs, a peak-to-peak periodic jitter of 2.06ps, and a total RMS jitter of 680fs. The reference spurs are measured to be -64.3dB below the carrier. At 8GHz the system consumes 2.49mW from a 1V supply.

Relevance:

100.00%

Publisher:

Abstract:

The great recession of 2008/2009 had a huge impact on unemployment and public finances in most advanced countries, and these impacts were magnified in the southern euro area countries by the sovereign debt crisis of 2010/2011. The fiscal consolidation imposed by the European Union on highly indebted countries was based on the assumptions of so-called expansionary austerity. The evidence so far, however, points the other way, and the results of this paper support the opposing view of a self-defeating austerity. Based on the input-output relations of the productive system, an unemployment rate/budget balance trade-off equation is derived, as well as the impact of a strong fiscal consolidation based on social transfers and the notion of a neutral budget balance. An application to the Portuguese case confirms the huge costs of a strong fiscal consolidation, both in terms of unemployment and social policy regress, and it allows one to conclude that too much consolidation in one year makes consolidation more difficult in the following year.
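The input-output mechanism behind such a trade-off can be sketched numerically. Below is a toy two-sector Leontief exercise with made-up coefficients (the technical matrix, the spending pattern of transfers, the labour coefficients, and the tax rate are all hypothetical, not the paper's data); it only illustrates why a transfer cut saves less than its face value once demand effects feed back through the productive system:

```python
import numpy as np

# Toy two-sector Leontief model (illustrative numbers only).
A = np.array([[0.2, 0.3],    # technical coefficients: input i needed per unit of output j
              [0.1, 0.4]])
leontief_inverse = np.linalg.inv(np.eye(2) - A)

labour_per_output = np.array([0.02, 0.03])  # jobs per unit of output (hypothetical)
tax_rate = 0.35                             # revenue recouped per unit of output (stylized)

def effects_of_transfer_cut(cut):
    """A cut in social transfers lowers final demand, which propagates
    through the input-output system and feeds back on jobs and revenue."""
    demand_drop = np.array([0.6, 0.4]) * cut   # assumed spending pattern of transfers
    output_drop = leontief_inverse @ demand_drop
    jobs_lost = labour_per_output @ output_drop
    revenue_lost = tax_rate * output_drop.sum()
    net_saving = cut - revenue_lost            # consolidation is partly self-defeating
    return jobs_lost, net_saving

jobs, saving = effects_of_transfer_cut(100.0)
```

With these numbers, a 100-unit transfer cut destroys output worth almost twice the cut and nets only about a third of the headline saving, which is the qualitative point of a self-defeating consolidation.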

Relevance:

100.00%

Publisher:

Abstract:

Master's in Finance

Relevance:

100.00%

Publisher:

Abstract:

Master's dissertation, Corporate Finance, Faculdade de Economia, Universidade do Algarve, 2016

Relevance:

100.00%

Publisher:

Abstract:

New materials for OLED applications with low singlet–triplet energy splitting have recently been synthesized in order to allow the conversion of triplet into singlet excitons (emitting light) via a Thermally Activated Delayed Fluorescence (TADF) process, which involves excited states with a non-negligible amount of Charge Transfer (CT). The accurate modeling of these states with Time-Dependent Density Functional Theory (TD-DFT), the most used method so far because of the favorable trade-off between accuracy and computational cost, is however particularly challenging. We carefully address this issue here by considering materials with a small (large) singlet–triplet gap acting as emitter (host) in OLEDs and by comparing the accuracy of TD-DFT and the corresponding Tamm-Dancoff Approximation (TDA), which is found to greatly reduce error bars with respect to experiments thanks to better estimates for the lowest singlet–triplet transition. Finally, we quantitatively correlate the singlet–triplet splitting values with the extent of CT, using a simple metric extracted from calculations with double-hybrid functionals, which might be applied in further molecular engineering studies.

Relevance:

100.00%

Publisher:

Abstract:

Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit the potential performance, especially with energy consumption and chip area becoming two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware-acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified using profiling tools. Hardware acceleration can yield significant performance improvements for highly mathematical calculations or repeated functions. The performance of SoC systems can then be improved if the hardware-acceleration method is used to accelerate the element that incurs performance overheads. The concepts mentioned in this study can be easily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform include the following: (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core. The hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware-acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance. The identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform. Two types of hardware-acceleration methods, central-bus design and co-processor design, are implemented for comparison in the proposed architecture.
(3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed, and the trade-off among these three factors is compared and balanced. Different hardware accelerators are implemented and evaluated based on system requirements. (4) A system verification platform is designed based on the Integrated Circuit (IC) workflow, with hardware optimization techniques used for higher performance and lower resource costs. Experimental results show that the proposed hardware-acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the central-bus (Bus-IP) design, while the co-processor design achieves 7.9X performance and saves 75.85% of energy consumption.
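How much such an accelerator helps overall depends on the share of run time the profiled hotspot accounts for. As a back-of-the-envelope sketch (illustrative numbers, not figures from this work), Amdahl's law relates the hotspot fraction and the accelerator's local speedup to the system-level speedup:

```python
def overall_speedup(hotspot_fraction, accel_speedup):
    """Amdahl's law: overall speedup when only a fraction of the
    run time is accelerated by a hardware accelerator."""
    return 1.0 / ((1.0 - hotspot_fraction) + hotspot_fraction / accel_speedup)

# Hypothetical profile: the hotspot takes 90% of run time and the
# FPGA accelerator runs it 10x faster than software.
print(round(overall_speedup(0.9, 10.0), 2))  # -> 5.26
```

The unaccelerated remainder bounds the gain: even an infinitely fast accelerator for a 50% hotspot yields less than 2X, which is why hotspot identification comes first in the workflow.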

Relevance:

100.00%

Publisher:

Abstract:

We propose a model, based on the work of Brock and Durlauf, which looks at how agents make choices between competing technologies, as a framework for exploring aspects of the economics of the adoption of privacy-enhancing technologies. In order to formulate a model of decision-making among choices of technologies by these agents, we consider the following: context, the setting in which and the purpose for which a given technology is used; requirement, the level of privacy that the technology must provide for an agent to be willing to use the technology in a given context; belief, an agent’s perception of the level of privacy provided by a given technology in a given context; and the relative value of privacy, how much an agent cares about privacy in this context and how willing an agent is to trade off privacy for other attributes. We introduce these concepts into the model, admitting heterogeneity among agents in order to capture variations in requirement, belief, and relative value in the population. We illustrate the model with two examples: the possible effects of the recent Apple–FBI case on the adoption of iOS devices; and the effect of the recent revelations about the non-deletion of images on the adoption of Snapchat.
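A minimal numerical sketch of a Brock–Durlauf-style binary choice with heterogeneous agents; every distribution and parameter below is a hypothetical illustration, not the paper's specification. Each agent's utility from adopting combines a private term, value × (belief − requirement), with a conformity term in the population adoption share, and the equilibrium share is found by iterating the mean-field logit response:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents = 10_000

# Heterogeneous agents (hypothetical distributions):
belief = rng.normal(0.6, 0.15, n_agents)       # perceived privacy level of the technology
requirement = rng.normal(0.5, 0.10, n_agents)  # privacy level required in this context
value = rng.uniform(0.5, 2.0, n_agents)        # relative value placed on privacy

def equilibrium_share(conformity=0.8, beta=4.0, iters=500):
    """Iterate the mean-field logit best response to a fixed point:
    each agent adopts with logit probability of its utility, and the
    conformity term depends on the resulting population share."""
    share = 0.5
    for _ in range(iters):
        utility = value * (belief - requirement) + conformity * (2.0 * share - 1.0)
        prob_adopt = 1.0 / (1.0 + np.exp(-beta * utility))
        share = prob_adopt.mean()
    return share

share = equilibrium_share()
```

With positive average private benefit, the conformity term amplifies adoption: the fixed point with social interaction lies above the no-interaction share, which is the multiplier effect this class of models is used to study.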

Relevance:

100.00%

Publisher:

Abstract:

Random Walk with Restart (RWR) is an appealing measure of proximity between nodes based on graph structure. Since real graphs are often large and subject to minor changes, it is prohibitively expensive to recompute proximities from scratch. Previous methods use LU decomposition and degree-reordering heuristics, entailing O(|V|^3) time and O(|V|^2) memory to compute all |V|^2 pairs of node proximities in a static graph. In this paper, a dynamic scheme to assess RWR proximities is proposed: (1) For a unit update, we characterize the changes to all-pairs proximities as the outer product of two vectors. We notice that the multiplication of an RWR matrix and its transition matrix, unlike traditional matrix multiplications, is commutative. This can greatly reduce the computation of all-pairs proximities from O(|V|^3) to O(|delta|) time for each update without loss of accuracy, where |delta| (<<|V|^2) is the number of affected proximities. (2) To avoid O(|V|^2) memory for all pairs of outputs, we also devise efficient partitioning techniques for our dynamic model, which can compute all pairs of proximities segment-wise within O(l|V|) memory and O(|V|/l) I/O costs, where 1<=l<=|V| is a user-controlled trade-off between memory and I/O costs. (3) For bulk updates, we also devise aggregation and hashing methods, which can further discard many unnecessary updates and handle chunks of unit updates simultaneously. Our experimental results on various datasets demonstrate that our methods can be 1–2 orders of magnitude faster than other competitors while preserving scalability and exactness.
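The outer-product structure of a unit update can be illustrated with a dense numpy toy. This is only a sketch of the underlying linear algebra via the Sherman-Morrison identity, not the paper's O(|delta|) algorithm; the RWR convention Q = (1-c)·inv(I - cA) and the toy graph are assumptions for the example:

```python
import numpy as np

def rwr_all_pairs(A, c=0.85):
    """All-pairs RWR proximities, assuming Q = (1-c) * inv(I - c*A),
    with A column-stochastic and c the walk-continuation probability."""
    n = A.shape[0]
    return (1.0 - c) * np.linalg.inv(np.eye(n) - c * A)

def unit_update(Q, A, j, new_col, c=0.85):
    """Rank-one (outer-product) update of all-pairs proximities after
    column j of A is replaced by new_col (Sherman-Morrison identity)."""
    M_inv = Q / (1.0 - c)          # inv(I - c*A)
    u = new_col - A[:, j]          # the change to column j
    x = M_inv @ u
    row_j = M_inv[j, :]
    denom = 1.0 - c * x[j]
    return Q + (1.0 - c) * c * np.outer(x, row_j) / denom

# Toy 4-node graph, column-stochastic transition matrix (hypothetical data).
A = np.array([[0.0, 0.5, 0.0, 1.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0]])
Q = rwr_all_pairs(A)

# Unit update: node 0's out-edges change (new column 0).
new_col = np.array([0.0, 1.0, 0.0, 0.0])
A_new = A.copy(); A_new[:, 0] = new_col
Q_fast = unit_update(Q, A, 0, new_col)   # one outer product
Q_slow = rwr_all_pairs(A_new)            # recompute from scratch
assert np.allclose(Q_fast, Q_slow)
```

Since a unit edge change perturbs only one column of the transition matrix, the inverse changes by a single rank-one term, so all affected proximities come from one outer product rather than a fresh O(|V|^3) inversion.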

Relevance:

100.00%

Publisher:

Abstract:

Minimization of undesirable temperature gradients in all dimensions of a planar solid oxide fuel cell (SOFC) is central to the thermal management and commercialization of this electrochemical reactor. This article explores the operating variables that affect the temperature gradient in a multilayer SOFC stack and presents a trade-off optimization. Three promising approaches are numerically tested via a model-based sensitivity analysis. The numerically efficient thermo-chemical model that had already been developed by the authors for cell-scale investigations (Tang et al. Chem. Eng. J. 2016, 290, 252-262) is integrated and extended in this work to allow further thermal studies at commercial scales. Initially, the most common approach to minimizing the stack's thermal inhomogeneity, i.e., the use of excess air, is critically assessed. Subsequently, the adjustment of inlet gas temperatures is introduced as a complementary methodology to reduce the efficiency loss due to the use of excess air. As another practical approach, regulation of the oxygen fraction in the cathode coolant stream is examined from both technical and economic viewpoints. Finally, a multiobjective optimization calculation is conducted to find an operating condition in which the stack's efficiency is maximized while its temperature gradient is minimized.
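The concluding multiobjective step can be illustrated with a toy weighted-sum scalarization. Both response curves below are hypothetical stand-ins, not the model of Tang et al.; they are chosen only to reproduce the qualitative conflict that raising excess air flattens temperature gradients but erodes efficiency:

```python
import numpy as np

excess_air = np.linspace(1.5, 6.0, 451)          # air ratio (hypothetical range)
efficiency = 0.55 - 0.02 * (excess_air - 1.5)    # toy: blower load erodes efficiency
temp_gradient = 90.0 / excess_air                # toy: more coolant flattens gradients (K)

def best_operating_point(w_eff):
    """Weighted-sum scalarization over normalized objectives:
    maximize efficiency, minimize temperature gradient; the weight
    w_eff picks one point on the trade-off curve."""
    score = w_eff * (efficiency / efficiency.max()) \
            - (1.0 - w_eff) * (temp_gradient / temp_gradient.max())
    return excess_air[np.argmax(score)]
</```

Sweeping the weight traces the Pareto front: weighting efficiency heavily selects a low air ratio, while weighting thermal uniformity pushes the optimum toward higher excess air.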

Relevance:

100.00%

Publisher:

Abstract:

The Lusitanian toadfish, Halobatrachus didactylus, like other batrachoidids, is a benthic fish species with nesting behaviour during the breeding season. During this prolonged period it engages in mating activities and remains in the nest providing parental care. It is not known whether males feed while providing parental care, but their limited mobility is likely to restrict their diet and influence their fitness; as a consequence, egg cannibalism could occur as a life-history strategy. The aim of the present study is to ascertain the feeding behaviour of nesting males, in comparison with mature non-nesting males, and to identify potential life-history traits related to egg cannibalism. Nest-holders were sampled from artificial nests placed in an intertidal area of the Tagus estuary, exposed only during spring low tides. Their diet was compared with that of non-nesting mature males from the same area, captured by otter trawl. The present study demonstrates that, despite their constrained mobility, nest-holders feed during the breeding season, although in a more opportunistic fashion than non-nesting males: they showed a generalist feeding behaviour, with a more heterogeneous diet. Egg cannibalism was not related to male condition, paternity or brood size, but showed a higher incidence early in the season, when water temperatures were lower. The results suggest a possible seasonal trade-off between care and energy recovery, triggered by environmental factors: under conditions unfavourable to sustaining viable eggs, the male may recover energy by eating eggs, thus benefiting future reproductive success later in the season.

Relevance:

100.00%

Publisher:

Abstract:

This article explores Ulrich Beck’s theorisation of risk society by focusing on the way in which the risk of Bt cotton is legitimated by six cultivators in Bantala, a village in Warangal, Andhra Pradesh, in India. The fieldwork for this study was conducted between June 2010 and March 2011, a period chosen to coincide with a cotton season. The study explores the experience of the cultivators using the ‘categories of legitimation’ defined by Van Leeuwen: authorisation, moral evaluation, rationalisation and mythopoesis. As well as permitting an exploration of the legitimation of Bt cotton by cultivators themselves within the high-risk context of the Indian agrarian crisis, the categories also serve as an analytical framework with which to structure a discourse analysis of participant perspectives. The study examines the complex trade-off that, Renn argues, the legitimation of ambiguous risk, such as that associated with Bt technology, entails. The research explores the way in which legitimation of the technology is informed by wider normative conceptualisations of development. This highlights that, in a context where indebtedness is strongly linked to farmer suicides, the potential of Bt cotton for poverty alleviation is traded against the uncertainty associated with the technology’s risks, which include its purported links to animal deaths. The study highlights the way in which the wider legitimation of a neoliberal approach to development in Andhra Pradesh serves to reinforce the choice of Bt cotton, and results in a depoliticisation of risk in Bantala. The research indicates, however, that this trade-off is subject to change over time, as economic benefits wane and risks accumulate. It also highlights the need for caution in relation to the proposed extension of Bt technology to food crops, such as Bt brinjal (aubergine).

Relevance:

100.00%

Publisher:

Abstract:

There has recently been great interest in studying the flow characteristics of suspensions in different environmental and industrial applications, such as snow avalanches, debris flows, hydrotransport systems, and material casting processes. From a rheological point of view, the majority of these suspensions, such as fresh concrete, behave mostly as non-Newtonian fluids. Concrete is the most widely used construction material in the world. Due to the limitations of normal concrete in terms of workability and formwork-filling ability, a new class of concrete was developed that is able to flow under its own weight, especially through narrow gaps in the congested areas of the formwork. Accordingly, self-consolidating concrete (SCC) is a novel construction material that is gaining market acceptance in various applications. The higher fluidity of SCC enables it to be used in a number of special applications, such as densely reinforced sections. However, the higher flowability of SCC makes it more sensitive to segregation of coarse particles during flow (i.e., dynamic segregation) and thereafter at rest (i.e., static segregation). Dynamic segregation can increase when SCC flows over a long distance or in the presence of obstacles. Therefore, there is always a need to establish a trade-off between the flowability, passing ability, and stability properties of SCC suspensions. This should be taken into consideration when designing the casting process and the mixture proportioning of SCC, a task called the “workability design” of SCC. An efficient and inexpensive workability-design approach consists of predicting and optimizing the workability of the concrete mixtures for the selected construction processes, such as transportation, pumping, casting, compaction, and finishing.
Indeed, the mixture proportioning of SCC should ensure the construction quality demands, such as the demanded levels of flowability, passing ability, filling ability, and stability (dynamic and static). It is therefore necessary to develop theoretical tools to assess under what conditions the construction quality demands are satisfied. Accordingly, this thesis is dedicated to carrying out analytical and numerical simulations to predict the flow performance of SCC under different casting processes, such as pumping and tremie applications, or casting using buckets. The L-Box and T-Box set-ups can evaluate the flow performance properties of SCC (e.g., flowability, passing ability, filling ability, and shear-induced and gravitational dynamic segregation) in the casting of wall and beam elements. The specific objective of the study is to relate the numerical results of flow simulation of SCC in the L-Box and T-Box test set-ups, reported in this thesis, to the flow performance properties of SCC during casting. Accordingly, SCC is modeled as a heterogeneous material. Furthermore, an analytical model is proposed to predict the flow performance of SCC in the L-Box set-up using the Dam Break Theory. In addition, the results of the numerical simulation of SCC casting in a reinforced beam are verified against experimental free-surface profiles. The results of numerical simulations of SCC casting (modeled as a single homogeneous fluid) are used to determine the critical zones corresponding to the highest risks of segregation and blocking. The effects of rheological parameters, density, particle contents, distribution of reinforcing bars, and particle-bar interactions on the flow performance of SCC are evaluated using CFD simulations of SCC flow in the L-Box and T-Box test set-ups (modeled as a heterogeneous material).
Two new approaches are proposed to classify SCC mixtures based on filling ability and “performability”, the latter combining the flowability, passing ability, and dynamic stability of SCC.

Relevance:

100.00%

Publisher:

Abstract:

This thesis examines firms' real decisions using a large panel of unquoted euro area firms over the period 2003-2011. To this end, the thesis is composed of five chapters, of which three are empirical. They assess dimensions of firm behaviour across different specifications, and each provides a detailed discussion of the contribution, the theoretical and empirical background, and the panel-data techniques implemented. Chapter 1 describes the introduction and outline of the thesis. Chapter 2 presents an empirical analysis of the link between financial pressure and firms' employment levels. In this set-up, the strength of financial pressure during the financial crisis is explored. It is also tested whether this effect differs for financially constrained and unconstrained firms in the periphery and non-periphery regions. The results of this chapter indicate that financial pressure exerts a negative impact on firms' employment decisions and that this effect is stronger during the crisis for financially constrained firms in the periphery. Chapter 3 analyses the cash policies of private and public firms. Controlling for firm size and other standard variables in the literature on cash holdings, the empirical findings suggest that private firms hold higher cash reserves than their public counterparts, indicating a greater precautionary demand for cash by the former. The relative difference between these two types of firms decreases (increases) the higher (lower) the level of financial pressure. The findings are robust to various model specifications and over different sub-samples. Overall, this chapter shows the relevance of firm size. Taken together, the findings of Chapter 3 are in line with the early literature on cash holdings and contradict recent studies, which find that the precautionary motive to hold cash is less pronounced for private firms than for public ones.
Chapter 4 investigates the relation between firms' stocks of inventories and trade credit (both extended and taken), controlling for firm size, the characteristics of the goods transacted, the recent financial crisis, and the development of the banking system. The main findings provide evidence of a trade-off between trade credit extended and firms' stocks of inventories. In other words, firms prefer to extend credit in the form of stocks to their financially constrained customers, to avoid holding costly inventories and to increase their sales. The provision of trade credit by firms also depends on the characteristics of the goods transacted, and this impact is stronger during the crisis. Larger and more liquid banking systems reduce the trade-off between the volume of inventories and the amount sold on credit. Trade credit taken is not affected by firms' stocks of inventories. Chapter 5 presents the conclusions of the thesis, providing the main contributions and implications of each empirical chapter and directions for future research.

Relevance:

100.00%

Publisher:

Abstract:

The optical constants were obtained using Wolfe's method. These constants, namely the absorption coefficient (α), the refractive index (n), and the thickness of a thin film (d), are important in the optical characterization of the material. Wolfe's method was compared with the method employed by R. Swanepoel. A constrained nonlinear programming model was developed, making it possible to estimate the optical constants of semiconducting thin films from transmission data alone. A solution of the nonlinear programming model via quadratic programming was presented. The reliability of the proposed method was demonstrated, obtaining values of α = 10378.34 cm−1, n = 2.4595, d = 989.71 nm, and Eg = 1.39 eV through numerical experiments with spectral transmittance measurements on Cu3BiS3 thin films.
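For context, the Swanepoel envelope method mentioned above recovers n and d in closed form from the transmittance envelopes. A minimal Python sketch of those textbook formulas follows; the substrate index and envelope readings are hypothetical sample values, not the Cu3BiS3 measurements:

```python
import math

def swanepoel_n(TM, Tm, s):
    """Refractive index from the transmittance envelopes TM (maxima)
    and Tm (minima), for a film on a substrate of index s
    (Swanepoel's formula for the weak/medium absorption region)."""
    N = 2.0 * s * (TM - Tm) / (TM * Tm) + (s * s + 1.0) / 2.0
    return math.sqrt(N + math.sqrt(N * N - s * s))

def swanepoel_d(lam1, n1, lam2, n2):
    """Film thickness from two adjacent envelope maxima at wavelengths
    lam1 > lam2 with indices n1, n2, from the interference condition
    2*n*d = m*lambda (adjacent maxima differ by one order)."""
    return lam1 * lam2 / (2.0 * (lam1 * n2 - lam2 * n1))

# Hypothetical envelope readings on a glass substrate (s = 1.51):
n_film = swanepoel_n(0.85, 0.55, 1.51)            # -> ~2.61
d_film = swanepoel_d(1040.0, n_film, 866.67, n_film)  # same units as the wavelengths
```

Wolfe's nonlinear-programming approach instead fits all constants simultaneously to the full transmission curve, which avoids the envelope-reading step these formulas require.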