974 results for GRASP-CP


Relevance: 10.00%

Publisher:

Abstract:

Watt, P., Medieval Women's Writing (Cambridge: Polity Press, 2007) RAE2008


Monograph presented to Universidade Fernando Pessoa for the degree of Licentiate in Dental Medicine


Postgraduate project/dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Pharmaceutical Sciences


Temporal structure in skilled, fluent action exists at several nested levels. At the largest scale considered here, short sequences of actions that are planned collectively in prefrontal cortex appear to be queued for performance by a cyclic competitive process that operates in concert with a parallel analog representation that implicitly specifies the relative priority of elements of the sequence. At an intermediate scale, single acts, like reaching to grasp, depend on coordinated scaling of the rates at which many muscles shorten or lengthen in parallel. To ensure success of acts such as catching an approaching ball, such parallel rate scaling, which appears to be one function of the basal ganglia, must be coupled to perceptual variables, such as time-to-contact. At a finer scale, within each act, desired rate scaling can be realized only if precisely timed muscle activations first accelerate and then decelerate the limbs, to ensure that muscle length changes do not under- or over-shoot the amounts needed for precise acts. Each context of action may require a timed muscle activation pattern much different from those of similar contexts. Because context differences that require different treatment cannot be known in advance, a formidable adaptive engine, the cerebellum, is needed to amplify differences within, and continuously search, a vast parallel signal flow, in order to discover contextual "leading indicators" of when to generate distinctive parallel patterns of analog signals. From some parts of the cerebellum, such signals control muscles. But a recent model shows how the lateral cerebellum may serve the competitive queuing system (in frontal cortex) as a repository of quickly accessed long-term sequence memories. Thus different parts of the cerebellum may use the same adaptive engine design to serve the lowest and the highest of the three levels of temporal structure treated.
If so, no one-to-one mapping exists between levels of temporal structure and major parts of the brain. Finally, recent data cast doubt on network-delay models of cerebellar adaptive timing.



Since Wireless Sensor Networks (WSNs) are subject to failures, fault-tolerance becomes an important requirement for many WSN applications. Fault-tolerance can be enabled in different areas of WSN design and operation, including the Medium Access Control (MAC) layer and the initial topology design. To be robust to failures, a MAC protocol must be able to adapt to traffic fluctuations and topology dynamics. We design ER-MAC, which can switch from energy-efficient operation during normal monitoring to reliable and fast delivery during emergency monitoring, and vice versa. It can also prioritise high-priority packets and guarantee fair packet delivery from all sensor nodes. Topology design supports fault-tolerance by ensuring that there are alternative acceptable routes to data sinks when failures occur. We provide solutions for four topology planning problems: Additional Relay Placement (ARP), Additional Backup Placement (ABP), Multiple Sink Placement (MSP), and Multiple Sink and Relay Placement (MSRP). Our solutions use a local search technique based on Greedy Randomized Adaptive Search Procedures (GRASP). GRASP-ARP deploys relays for (k,l)-sink-connectivity, where each sensor node must have k vertex-disjoint paths of length ≤ l. To count how many disjoint paths a node has, we propose Counting-Paths. GRASP-ABP deploys fewer relays than GRASP-ARP by focusing only on the most important nodes: those whose failure has the worst effect. To identify such nodes, we define Length-constrained Connectivity and Rerouting Centrality (l-CRC). Greedy-MSP and GRASP-MSP place minimal-cost sinks to ensure that each sensor node in the network is double-covered, i.e. has two length-bounded paths to two sinks. Greedy-MSRP and GRASP-MSRP deploy sinks and relays with minimal cost to make the network double-covered and non-critical, i.e. all sensor nodes must have length-bounded alternative paths to sinks when an arbitrary sensor node fails.
We then evaluate the fault-tolerance of each topology in data gathering simulations using ER-MAC.
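The GRASP-based solvers named above all share the classic two-phase GRASP pattern: a greedy randomized construction (choosing each step from a restricted candidate list, RCL) followed by local search on the constructed solution. The sketch below applies that pattern to a toy minimum-dominating-set instance as a stand-in for sink/relay coverage; the `alpha` RCL parameter, the toy problem and all function names are illustrative assumptions, not the thesis algorithms.

```python
import random

def dominates(adj, sol):
    """True if every node is in sol or adjacent to a node in sol."""
    covered = set()
    for v in sol:
        covered |= adj[v] | {v}
    return covered >= set(adj)

def grasp_dominating_set(adj, iters=50, alpha=0.3, seed=1):
    """GRASP sketch: randomized greedy construction + local search.
    adj maps each node to its set of neighbours (undirected)."""
    rng = random.Random(seed)
    nodes = list(adj)
    best = set(nodes)                      # trivial solution: all nodes
    for _ in range(iters):
        # --- construction: cover all nodes, randomizing among top candidates
        uncovered = set(nodes)
        sol = set()
        while uncovered:
            gains = {v: len(uncovered & (adj[v] | {v}))
                     for v in nodes if v not in sol}
            gmax, gmin = max(gains.values()), min(gains.values())
            rcl = [v for v, g in gains.items()
                   if g >= gmax - alpha * (gmax - gmin)]
            v = rng.choice(rcl)            # randomized greedy choice
            sol.add(v)
            uncovered -= adj[v] | {v}
        # --- local search: drop any node whose removal keeps domination
        for v in sorted(sol):
            if dominates(adj, sol - {v}):
                sol.discard(v)
        if len(sol) < len(best):
            best = set(sol)
    return best
```

On a star graph the centre alone dominates, and the sketch finds it; the thesis's real solvers replace the coverage test with (k,l)-connectivity or double-coverage checks.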


The work presented in this thesis covers four major topics of research related to the grid integration of wave energy. More specifically, the grid impact of a wave farm on the power quality of its local network is investigated. Two estimation methods were developed for the flicker level Pst generated by a wave farm, in relation both to its rated power and to the impedance angle ψk of the grid node to which it is connected. The electrical design of a typical wave farm is also studied in terms of minimum rating for three costly pieces of equipment, namely the VAr compensator, the submarine cables and the overhead line. The power losses dissipated within the farm's electrical network are also evaluated. The feasibility of transforming a test site into a commercial site of greater rated power is investigated from the perspective of power quality and of cable and overhead-line thermal loading. Finally, the generic modelling of ocean devices, referring here to both wave and tidal current devices, is investigated.


The objective of this thesis is the exploration and characterisation of the nanoscale electronic properties of conjugated polymers and nanocrystals. In Chapter 2, the first application of conducting-probe atomic force microscopy (CP-AFM)-based displacement-voltage (z-V) spectroscopy to local measurement of electronic properties of conjugated polymer thin films is reported. Charge injection thresholds along with corresponding single particle gap and exciton binding energies are determined for a poly[2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylenevinylene] thin film. By performing measurements across a grid of locations on the film, a series of exciton binding energy distributions are identified. The variation in measured exciton binding energies is in contrast to the smoothness of the film, suggesting that the variation may be attributable to differences in the nano-environment of the polymer molecules within the film at each measurement location. In Chapter 3, the CP-AFM-based z-V spectroscopy method is extended for the first time to local, room temperature measurements of the Coulomb blockade voltage thresholds arising from sequential single electron charging of 28 kDa Au nanocrystal arrays. The fluid-like properties of the nanocrystal arrays enable reproducible formation of nanoscale probe-array-substrate junctions, allowing the influence of background charge on the electronic properties of the array to be identified. CP-AFM also allows complementary topography and phase data to be acquired before and after spectroscopy measurements, enabling comparison of local array morphology with local measurements of the Coulomb blockade thresholds. In Chapter 4, melt-assisted template wetting is applied for the first time to massively parallel fabrication of poly-(3-hexylthiophene) nanowires. The structural characteristics of the wires are first presented.
Two-terminal electrical measurements of individual nanowires, utilising a CP-AFM tip as the source electrode, are then used to obtain the intrinsic nanowire resistivity and the total nanowire-electrode contact resistance, subsequently allowing single-nanowire hole mobility and mean nanowire-electrode barrier height values to be estimated. In Chapter 5, solution-assisted template wetting is used for fabrication of fluorene-dithiophene co-polymer nanowires. The structural characteristics of these wires are also presented. Two-terminal electrical measurements of individual nanowires indicate barrier formation at the nanowire-electrode interfaces, and measured resistivity values suggest doping of the nanowires, possibly due to air exposure. The first report of single conjugated polymer nanowires as ultra-miniature photodetectors is presented, with single wire devices yielding external quantum efficiencies ~0.1% and responsivities ~0.4 mA/W under monochromatic illumination.


Thin film dielectrics based on titanium, zirconium or hafnium oxides are being introduced to increase the permittivity of insulating layers in transistors for micro/nanoelectronics and memory devices. Atomic layer deposition (ALD) is the process of choice for fabricating these films, as it allows for high control of composition and thickness in thin, conformal films which can be deposited on substrates with high aspect-ratio features. The success of this method depends crucially on the chemical properties of the precursor molecules. A successful ALD precursor should be volatile, stable in the gas phase, but reactive on the substrate and growing surface, leading to inert by-products. In recent years, many different ALD precursors for metal oxides have been developed, but many of them suffer from low thermal stability. Much promise is shown by group 4 metal precursors that contain cyclopentadienyl (Cp = C5H5-xRx) ligands. One of the main advantages of Cp precursors is their thermal stability. In this work, ab initio calculations were carried out at the level of density functional theory (DFT) on a range of heteroleptic metallocenes [M(Cp)4-n(L)n], M = Hf/Zr/Ti, L = Me and OMe, in order to find mechanistic reasons for their observed behaviour during ALD. Based on optimized monomer structures, reactivity is analyzed with respect to ligand elimination. The order in which different ligands are eliminated during ALD follows their energetics, in agreement with experimental measurements. Titanocene-derived precursors, TiCp*(OMe)3, do not yield TiO2 films in ALD with water, while Ti(OMe)4 does. DFT was used to model the ALD reaction sequence and find the reason for the difference in growth behaviour. Both precursors adsorb initially via hydrogen-bonding. The simulations reveal that the Cp* ligand of TiCp*(OMe)3 lowers the Lewis acidity of the Ti centre and prevents its coordination to surface O (densification) during both of the ALD pulses.
Blocking this step hinders further ALD reactions, which is why no ALD growth is observed from TiCp*(OMe)3 and water. The thermal stability in the gas phase of Ti, Zr and Hf precursors that contain cyclopentadienyl ligands was also considered. The reaction that was found using DFT is an intramolecular α-H transfer that produces an alkylidene complex. The analysis shows that thermal stabilities of complexes of the type MCp2(CH3)2 increase down group 4 (M = Ti, Zr and Hf) due to an increase in the HOMO-LUMO gap of the reactants, which itself increases with the electrophilicity of the metal. The reverse reaction of α-hydrogen abstraction in ZrCp2Me2 is a 1,2-addition of a C-H bond across a Zr=C bond. The same mechanism is investigated to determine if it operates for 1,2-addition of the tBu C-H across Hf=N in a corresponding Hf dimer complex. The aim of this work is to understand orbital interactions, how bonds break and how new bonds form, and in what state hydrogen is transferred during the reaction. Calculations reveal two synchronous and concerted electron transfers within a four-membered cyclic transition state in the plane between the cyclopentadienyl rings, one π(M=X)-to-σ(M-C) involving metal d orbitals and the other σ(C-H)-to-σ(X-H) mediating the transfer of neutral H, where X = C or N. The reaction of the hafnium dimer complex with CO that was studied for the purpose of understanding C-H bond activation has another interesting application, namely the cleavage of an N-N bond and resulting N-C bond formation. Analysis of the orbital plots reveals repulsion between the occupied orbitals on CO and the N-N unit where CO approaches along the N-N axis. The repulsions along the N-N axis are minimized by instead forming an asymmetrical intermediate in which CO first coordinates to one Hf and then to N. This breaks the symmetry of the N-N unit and the resultant mixing of MOs allows σ(NN) to be polarized, localizing electrons on the more distant N.
This allowed σ(CO) and π(CO) donation to N and back-donation of π*(Hf2N2) to CO. Improved understanding of the chemistry of metal complexes can be gained from atomic-scale modelling and this provides valuable information for the design of new ALD precursors. The information gained from the model decomposition pathway can be additionally used to understand the chemistry of molecules in the ALD process as well as in catalytic systems.


The landscape of late medieval Ireland, like most places in Europe, was characterized by intensified agricultural exploitation, the growth and founding of towns and cities and the construction of large stone edifices, such as castles and monasteries. None of these could have taken place without iron. Axes were needed for clearing woodland, ploughs for turning the soil, saws for wooden buildings and hammers and chisels for the stone ones, none of which could realistically have been made from any other material. The many battles, waged with ever more sophisticated weaponry, needed a steady supply of iron and steel. During the same period, the European iron industry itself underwent its most fundamental transformation since its inception; at the beginning of the period it was almost exclusively based on small furnaces producing solid blooms, and by the turn of the seventeenth century it was largely based on liquid-iron production in blast-furnaces the size of a house. One of the great advantages of studying the archaeology of ironworking is that its main residue, slag, is often produced in copious amounts during both smelting and smithing, is virtually indestructible and has very little secondary use. This means that most sites where ironworking was carried out are readily recognizable as such by the occurrence of this slag. Moreover, visual examination can distinguish between various types of slag, which are often characteristic of the activity from which they derive. The ubiquity of ironworking in the period under study further means that we have large amounts of residues available for study, allowing us to distinguish patterns both inside assemblages and between sites. Disadvantages of the nature of the remains related to ironworking include the poor preservation of the installations used, especially the furnaces, which were often built out of clay and located above ground.
Added to this are the many parameters contributing to the formation of the above-mentioned slag, making its composition difficult to connect to a certain technology or activity. Ironworking technology in late medieval Ireland has thus far not been studied in detail. Much of the archaeological literature on the subject is still tainted by the erroneous attribution of the main type of slag, bun-shaped cakes, to smelting activities. The large-scale infrastructure works of the first decade of the twenty-first century have led to an exponential increase in the number of sites available for study. At the same time, much of the material related to metalworking recovered during these boom years was subjected to specialist analysis. This has led to a near-complete overhaul of our knowledge of early ironworking in Ireland. Although many of these new insights are quickly seeping into the general literature, no concise overviews of the current understanding of early Irish ironworking technology have been published to date. This presented a unique opportunity to apply these new insights to the extensive body of archaeological data we now possess. The resulting archaeological information was supplemented with, and compared to, that contained in the historical sources relating to Ireland for the same period. This added insights into aspects of the industry often difficult to grasp solely through the archaeological sources, such as the people involved and the trade in iron. Additionally, overviews on several other topics, such as a new distribution map of Irish iron ores and a first analysis of the information on iron smelting and smithing in late medieval western Europe, were compiled to allow this new knowledge of late medieval Irish ironworking to be put into a wider context.
Contrary to current views, it appears that it is not smelting technology which differentiates Irish ironworking from the rest of Europe in the late medieval period, but its smithing technology and organisation. The Irish iron-smelting furnaces are generally of the slag-tapping variety, like their other European counterparts. Smithing, on the other hand, was carried out at ground level until at least the sixteenth century in Ireland, whereas waist-level hearths became the norm further afield from the fourteenth century onwards. Ceramic tuyeres continued to be used as bellows protectors, whereas these are unknown elsewhere on the continent. Moreover, the lack of market centres at different times in late medieval Ireland led to the appearance of isolated rural forges, a type of site not encountered in other European countries during that period. When these market centres are present, they appear to be the settings where bloom smithing is carried out. In summary, the research below not only offered us the opportunity to give late medieval ironworking the place it deserves in the broader knowledge of Ireland's past, but it also provided both a base for future research within the discipline and a research model applicable to different time periods, geographical areas and, perhaps, different industries.


We revisit the well-known problem of sorting under partial information: sort a finite set given the outcomes of comparisons between some pairs of elements. The input is a partially ordered set P, and solving the problem amounts to discovering an unknown linear extension of P, using pairwise comparisons. The information-theoretic lower bound on the number of comparisons needed in the worst case is log e(P), the binary logarithm of the number of linear extensions of P. In a breakthrough paper, Jeff Kahn and Jeong Han Kim (STOC 1992) showed that there exists a polynomial-time algorithm for the problem achieving this bound up to a constant factor. Their algorithm invokes the ellipsoid algorithm at each iteration for determining the next comparison, making it impractical. We develop efficient algorithms for sorting under partial information. Like Kahn and Kim, our approach relies on graph entropy. However, our algorithms differ in essential ways from theirs. Rather than resorting to convex programming for computing the entropy, we approximate the entropy, or make sure it is computed only once in a restricted class of graphs, permitting the use of a simpler algorithm. Specifically, we present: an O(n^2) algorithm performing O(log n·log e(P)) comparisons; an O(n^2.5) algorithm performing at most (1+ε) log e(P) + O_ε(n) comparisons; an O(n^2.5) algorithm performing O(log e(P)) comparisons. All our algorithms are simple to implement. © 2010 ACM.
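The lower bound mentioned above is easy to reproduce for tiny posets: enumerate all permutations, keep those consistent with P, and take the binary logarithm of the count. The brute-force sketch below is illustrative only (the example poset and function name are assumptions); the algorithms in the abstract achieve the bound without ever enumerating extensions.

```python
import itertools
import math

def linear_extensions(n, relations):
    """Enumerate linear extensions of a poset on 0..n-1, where
    relations is a list of (a, b) pairs meaning a < b in P.
    Brute force over all n! permutations, fine only for small n."""
    exts = []
    for perm in itertools.permutations(range(n)):
        pos = {v: i for i, v in enumerate(perm)}
        if all(pos[a] < pos[b] for a, b in relations):
            exts.append(perm)
    return exts

# Example poset on {0, 1, 2}: 0 < 1 and 0 < 2 (a "V" shape).
# Its linear extensions are (0,1,2) and (0,2,1), so e(P) = 2 and any
# comparison-based sorter needs at least log2 e(P) = 1 comparison.
e_P = len(linear_extensions(3, [(0, 1), (0, 2)]))
lower_bound = math.log2(e_P)
```

With no relations, e(P) = n! and the bound degenerates to the usual log n! comparison lower bound for sorting from scratch.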


The increasing need for cross sections far from the valley of stability, especially for applications such as nuclear astrophysics, poses a challenge for nuclear reaction models. So far, predictions of cross sections have relied on more or less phenomenological approaches, depending on parameters adjusted to available experimental data or deduced from systematic relations. While such predictions are expected to be reliable for nuclei not too far from the experimentally known regions, it is clearly preferable to use more fundamental approaches, based on sound physical bases, when dealing with very exotic nuclei. Thanks to the high computer power available today, all major ingredients required to model a nuclear reaction can now be (and have been) microscopically (or semi-microscopically) determined starting from the information provided by an effective nucleon-nucleon interaction. All these microscopic ingredients have been included in the latest version of the TALYS nuclear reaction code (http://www.talys.eu/).


The combinatorial model of nuclear level densities has now reached a level of accuracy comparable to that of the best global analytical expressions, without suffering from the limits imposed by the statistical hypothesis on which the latter expressions rely. In particular, it naturally provides non-Gaussian spin distributions as well as non-equipartition of parities, which are known to have an impact on cross section predictions at low energies [1, 2, 3]. Our previous global models developed in Refs. [1, 2] suffered from deficiencies, in particular in the way the collective effects, both vibrational and rotational, were treated. We have recently improved this treatment using simultaneously the single-particle levels and collective properties predicted by a newly derived Gogny interaction [4], therefore enabling a microscopic description of energy-dependent shell, pairing and deformation effects. In addition, for deformed nuclei, the transition to sphericity is coherently taken into account on the basis of a temperature-dependent Hartree-Fock calculation which provides at each temperature the structure properties needed to build the level densities. This new method is described and shown to give promising results with respect to available experimental data.