936 results for Higher-order functions


Relevance: 80.00%

Abstract:

This paper reports an investigation into the link between failed proofs and non-theorems. It seeks to answer the question of whether anything more can be learned from a failed proof attempt than can be discovered from a counter-example. We suggest that the branch of the proof in which failure occurs can be mapped back to the segments of code that are the culprit, helping to locate the error. This process of tracing provides finer grained isolation of the offending code fragments than is possible from the inspection of counter-examples. We also discuss ideas for how such a process could be automated.

Relevance: 80.00%

Abstract:

Proof critics are a technology from the proof planning paradigm. They examine failed proof attempts in order to extract information which can be used to generate a patch which will allow the proof to go through. We consider the proof of the "whisky problem", a challenge problem from the domain of temporal logic. The proof requires a generalisation of the original conjecture and we examine two proof critics which can be used to create this generalisation. Using these critics we believe we have produced the first automatic proofs of this challenge problem. We use this example to motivate a comparison of the two critics and propose that there is a place for specialist critics as well as powerful general critics. In particular we advocate the development of critics that do not use meta-variables.

Relevance: 80.00%

Abstract:

We describe an integration of the SVC decision procedure with the HOL theorem prover. This integration was achieved using the PROSPER toolkit. The SVC decision procedure operates on rational numbers, for which an axiomatic theory was provided in HOL. The decision procedure also returns counterexamples, and a framework has been devised for handling them in a HOL setting.
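Since the abstract describes an architectural pattern (an external decision procedure that either certifies a goal or hands back a falsifying assignment to be dealt with inside the prover), a minimal sketch of that pattern may be helpful. All names below are hypothetical stand-ins written for illustration; they are not the SVC, HOL, or PROSPER interfaces.

# Illustrative sketch only: a toy "decision procedure" over the rationals that either
# certifies a goal or returns a counterexample, mirroring the division of labour
# described in the abstract. ToyDecisionProcedure and decide are invented for this
# example and bear no relation to the real SVC/HOL/PROSPER code.
from fractions import Fraction
from itertools import product
from typing import Callable, Dict, Optional

Assignment = Dict[str, Fraction]

class ToyDecisionProcedure:
    """Searches a small grid of rational values for a falsifying assignment."""
    def __init__(self, variables, points=(-2, -1, 0, Fraction(1, 2), 1, 2)):
        self.variables = list(variables)
        self.points = [Fraction(p) for p in points]

    def check(self, goal: Callable[[Assignment], bool]) -> Optional[Assignment]:
        for values in product(self.points, repeat=len(self.variables)):
            env = dict(zip(self.variables, values))
            if not goal(env):
                return env      # counterexample found
        return None             # no counterexample in the searched grid

def decide(goal: Callable[[Assignment], bool], dp: ToyDecisionProcedure) -> str:
    cex = dp.check(goal)
    if cex is None:
        # In a real integration, the external "valid" verdict would be reflected
        # back into the prover as a theorem (or re-proved from the rational axioms).
        return "accepted as a theorem"
    # Otherwise the counterexample is reported so the failing assignment can be
    # inspected inside the proof assistant, as in the framework described above.
    readable = {v: str(x) for v, x in cex.items()}
    return "counterexample: " + str(readable)

# A conjecture that is not a theorem over the rationals: x * x >= x.
print(decide(lambda env: env["x"] * env["x"] >= env["x"], ToyDecisionProcedure(["x"])))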

Relevance: 80.00%

Abstract:

Recent developments in the physical parameterizations available in spectral wave models have already been validated, but there is little information on their relative performance, especially with regard to the higher-order spectral moments and wave partitions. This study concentrates on documenting their strengths and limitations using satellite measurements, buoy spectra, and a comparison between the different models. It is confirmed that all models perform well in terms of significant wave heights; however, higher-order moments have larger errors. The partition wave quantities perform well in terms of direction and frequency, but the magnitude and directional spread typically have larger discrepancies. The high-frequency tail is examined through the mean square slope using satellites and buoys. From this analysis it is clear that some models behave better than others, suggesting their parameterizations match the physical processes reasonably well. However, none of the models is entirely satisfactory, pointing to poorly constrained parameterizations or missing physical processes. The major space-time differences between the models are related to the swell field, stressing the importance of describing its evolution. An example swell field confirms that the wave heights can be notably different between model configurations while the directional distributions remain similar. It is clear that all models have difficulty in describing the directional spread. Therefore, knowledge of the source term directional distributions is paramount in improving the wave model physics in the future.
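For readers less familiar with the quantities being compared, the standard deep-water, linear-theory definitions (not specific to any of the models evaluated here) are:

\[
m_n = \int_0^\infty f^{\,n}\,E(f)\,\mathrm{d}f, \qquad
H_s = 4\sqrt{m_0}, \qquad
\mathrm{mss} = \int_0^\infty \frac{(2\pi f)^4}{g^2}\,E(f)\,\mathrm{d}f,
\]

where E(f) is the frequency spectrum. Because the mean square slope integral weights the spectrum by f^4, it is dominated by the high-frequency tail, which is why it is used here to probe the tail parameterizations.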

Relevance: 80.00%

Abstract:

Relational reasoning, or the ability to identify meaningful patterns within any stream of information, is a fundamental cognitive ability associated with academic success across a variety of domains of learning and levels of schooling. However, the measurement of this construct has been historically problematic. For example, while the construct is typically described as multidimensional—including the identification of multiple types of higher-order patterns—it is most often measured in terms of a single type of pattern: analogy. For that reason, the Test of Relational Reasoning (TORR) was conceived and developed to include three other types of patterns that appear to be meaningful in the educational context: anomaly, antinomy, and antithesis. Moreover, as a way to focus on fluid relational reasoning ability, the TORR was developed to include, except for the directions, entirely visuo-spatial stimuli, which were designed to be as novel as possible for the participant. By focusing on fluid intellectual processing, the TORR was also developed to be fairly administered to undergraduate students—regardless of the particular gender, language, and ethnic groups they belong to. However, although some psychometric investigations of the TORR have been conducted, its actual fairness across those demographic groups has yet to be empirically demonstrated. Therefore, a systematic investigation of differential-item-functioning (DIF) across demographic groups on TORR items was conducted. A large (N = 1,379) sample, representative of the University of Maryland on key demographic variables, was collected, and the resulting data was analyzed using a multi-group, multidimensional item-response theory model comparison procedure. Using this procedure, no significant DIF was found on any of the TORR items across any of the demographic groups of interest. This null finding is interpreted as evidence of the cultural-fairness of the TORR, and potential test-development choices that may have contributed to that cultural-fairness are discussed. For example, the choice to make the TORR an untimed measure, to use novel stimuli, and to avoid stereotype threat in test administration, may have contributed to its cultural-fairness. Future steps for psychometric research on the TORR, and substantive research utilizing the TORR, are also presented and discussed.
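As background on the modelling machinery (a generic formulation, not necessarily the exact specification fitted in the study), a compensatory multidimensional two-parameter IRT model writes the probability that examinee i answers item j correctly as

\[
P(X_{ij}=1 \mid \boldsymbol{\theta}_i) =
\frac{1}{1+\exp\!\big[-(\mathbf{a}_j^{\top}\boldsymbol{\theta}_i + d_j)\big]},
\]

and an item exhibits DIF when its parameters (a_j, d_j) must be allowed to differ across demographic groups, after matching on the latent abilities θ, in order to fit the data; the multi-group model comparison tests whether freeing those parameters across groups significantly improves fit.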

Relevance: 80.00%

Abstract:

Why are some companies more successful than others? This thesis approaches the question by enlisting theoretical frameworks that explain performance through internal factors, drawing on the resource-based view and, in particular, the dynamic capabilities approach. To deepen understanding of the drivers of, and barriers to, developing these higher-order routines aimed at improving operational-level routines, this thesis explores organisational culture and identity research for microfoundational antecedents that might shed light on the formation of dynamic capabilities. The dynamic capabilities framework in this thesis strives to take the theoretical concept closer to practical applicability. This is achieved through the creation of a dynamic capabilities matrix consisting of four dimensions often encountered in the dynamic capabilities literature. The quadrants are formed along internal-external and resources-abilities axes, and consist of Sensing, Learning, Reconfiguration and Partnering facets. A key element of this thesis is the reality continuum, which illustrates the different levels of reality inherent in any group of human individuals. The theoretical framework constructed in the thesis suggests a link between the collective but constructivist understanding of the organisation and both the operational and higher-level routines evident in the more positivist realm. The findings from three different case organisations suggest that the constructivist assumptions inherent to an organisation function as a generative base for both drivers and barriers towards developing dynamic capabilities. From each organisation one core assumption is scrutinised to identify its connections to the four dimensions of the dynamic capabilities. These connections take the form of drivers or barriers – or have the potential to develop into one or the other. The main contribution of this thesis is to show that one key for an organisation to perform well in a turbulent setting is to understand the different levels of reality inherent in any group of people. Recognising the intangible levels gives an advantage in the tangible ones.

Relevance: 80.00%

Abstract:

We consider a two-dimensional Fermi-Pasta-Ulam (FPU) lattice with hexagonal symmetry. Using asymptotic methods based on a small-amplitude ansatz, at third order we obtain a reduction to a cubic nonlinear Schrödinger equation (NLS) for the breather envelope. However, this does not support stable soliton solutions, so we pursue a higher-order analysis yielding a generalised NLS, which includes known stabilising terms. We present numerical results which suggest that long-lived stationary and moving breathers are supported by the lattice. We find breather solutions which move in an arbitrary direction, an ellipticity criterion for the wavenumbers of the carrier wave, asymptotic estimates for the breather energy, and a minimum threshold energy below which breathers cannot be found. This energy threshold is maximised for stationary breathers, and becomes vanishingly small near the boundary of the elliptic domain where breathers attain a maximum speed. Several of the results obtained are similar to those obtained for the square FPU lattice (Butt & Wattis, J Phys A 39, 4955, 2006), though we find that the square and hexagonal lattices exhibit different properties in regard to the generation of harmonics and the isotropy of the generalised NLS equation.
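Schematically (the coefficients depend on the carrier wavenumber and the lattice interactions, so the form below is indicative rather than the paper's exact reduction), the third-order envelope equation is a cubic NLS of the type

\[
i\,\psi_{\tau} + D_1\,\psi_{XX} + D_2\,\psi_{YY} + B\,|\psi|^{2}\psi = 0,
\]

whose two-dimensional soliton solutions are not stable, which is what forces the higher-order analysis; the generalised NLS then carries additional higher-order dispersive and nonlinear terms as perturbations. In such reductions the 'elliptic' regime usually refers to the two dispersion coefficients having the same sign, D_1 D_2 > 0.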

Relevance: 80.00%

Abstract:

Using asymptotic methods, we investigate whether discrete breathers are supported by a two-dimensional Fermi-Pasta-Ulam lattice. A scalar (one-component) two-dimensional Fermi-Pasta-Ulam lattice is shown to model the charge stored within an electrical transmission lattice. A third-order multiple-scale analysis in the semi-discrete limit fails, since at this order, the lattice equations reduce to the (2+1)-dimensional cubic nonlinear Schrödinger (NLS) equation which does not support stable soliton solutions for the breather envelope. We therefore extend the analysis to higher order and find a generalised (2+1)-dimensional NLS equation which incorporates higher order dispersive and nonlinear terms as perturbations. We find an ellipticity criterion for the wave numbers of the carrier wave. Numerical simulations suggest that both stationary and moving breathers are supported by the system. Calculations of the energy show the expected threshold behaviour whereby the energy of breathers does not go to zero with the amplitude; we find that the energy threshold is maximised by stationary breathers, and becomes arbitrarily small as the boundary of the domain of ellipticity is approached.
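As an indication of the starting point (a generic scalar FPU form; the precise interaction potential used in the paper may differ), the lattice equations of motion can be written as

\[
\ddot{u}_{m,n} \;=\; \sum_{(m',n')\in\mathcal{N}(m,n)}
\Big[(u_{m',n'}-u_{m,n}) + a\,(u_{m',n'}-u_{m,n})^{2} + b\,(u_{m',n'}-u_{m,n})^{3}\Big],
\]

where the sum runs over the nearest neighbours of site (m,n) and, in the electrical-transmission-lattice interpretation, u_{m,n} plays the role of the stored charge. A small-amplitude, slowly modulated carrier-wave ansatz for u_{m,n} is what produces the (2+1)-dimensional NLS reduction described above.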

Relevance: 80.00%

Abstract:

Alzheimer's disease is the most common type of dementia in the elderly; it is characterized by early deficits in learning and memory formation and ultimately leads to a generalised loss of higher cognitive functions. While amyloid beta (Aβ) and tau are traditionally associated with the development of Alzheimer's disease, recent studies suggest that other factors, like the intracellular domain (APP-ICD) of the amyloid precursor protein (APP), could play a role. In this study, we investigated whether APP-ICD could affect synaptic transmission and synaptic plasticity in the hippocampus, which is involved in learning and memory processes. Our results indicated that overexpression of APP-ICD in hippocampal CA1 neurons leads to a decrease in evoked AMPA-receptor and NMDA-receptor dependent synaptic transmission. Our study demonstrated that this effect is specific to APP-ICD, since its closest homologue APLP2-ICD did not reproduce this effect. In addition, APP-ICD blocks the induction of long-term potentiation (LTP) and leads to increased expression and facilitated induction of long-term depression (LTD), while APLP2-ICD shows neither of these effects. Our study showed that this difference in synaptic transmission and plasticity between the two intracellular domains resides in a single residue: an alanine in APP-ICD versus a proline in APLP2-ICD. Exchanging this critical amino acid by point mutation, we observed that APP(PAV)-ICD no longer had an effect on synaptic plasticity. We also demonstrated that APLP2(AAV)-ICD mimics the effect of APP-ICD with regard to facilitated LTD. Next, we showed that the full-length APP-APLP2-APP construct (APP with its Aβ component substituted by the homologous APLP2 part) had no effect on synaptic transmission or synaptic plasticity when compared to the APP-ICD. However, by activating caspase cleavage prior to induction of LTD or LTP, we observed an LTD facilitation and a block of LTP with APP-APLP2-APP, effects that were not seen with the full-length APLP2 protein. APP is phosphorylated at threonine 668 (Thr668), which is located directly after the aforementioned critical alanine and the caspase cleavage site in APP-APLP2-APP. Mutating Thr668 to an alanine abolishes the effects on LTD and restores LTP induction. Finally, we showed that the facilitation of LTD with APP-APLP2-APP involves ryanodine receptor-dependent calcium release from intracellular stores. Taken together, we propose the emergence of a new APP intracellular domain, which plays a critical role in the regulation of synaptic plasticity and, by extension, could play a role in the development of memory loss in Alzheimer's disease.

Relevance: 80.00%

Abstract:

International audience

Relevance: 80.00%

Abstract:

I study how a larger party within a supply chain can use its superior knowledge about its partner, who is considered to be financially constrained, to help that partner gain access to cheap finance. In particular, I consider two scenarios: (i) retailer intermediation in supplier finance and (ii) the effectiveness of supplier buy-back finance. In the first chapter, I study how a large buyer could help small suppliers obtain financing for their operations. Especially in developing economies, traditional financing methods can be very costly or unavailable to such suppliers. In order to reduce channel costs, in recent years large buyers have started to implement their own financing methods that intermediate between suppliers and financing institutions. In this paper, I analyze the role and efficiency of buyer intermediation in supplier financing. Building a game-theoretical model, I show that buyer-intermediated financing can significantly improve supply chain performance. Using data from a large Chinese online retailer and through structural regression estimation based on the theoretical analysis, I demonstrate that buyer intermediation induces lower interest rates and wholesale prices, increases order quantities, and boosts supplier borrowing. The analysis also shows that the retailer systematically overestimates consumer demand. Based on counterfactual analysis, I predict that the implementation of buyer-intermediated financing for the online retailer in 2013 improved channel profits by 18.3%, yielding more than $68M in projected savings. In the second chapter, I study a novel buy-back financing scheme employed by large manufacturers in some emerging markets. A large manufacturer can secure financing for its budget-constrained downstream partners by assuming part of the risk for their inventory, committing to buy back some unsold units. A buy-back commitment can help a small downstream party secure a bank loan and further induce a higher order quantity through better allocation of risk in the supply chain. However, such a commitment may undermine supply chain performance, as it imposes extra costs on the supplier incurred by the return of large or costly-to-handle items. I first theoretically analyze the buy-back financing contract employed by a leading Chinese automotive manufacturer and some variants of this contracting scheme. In order to measure the effectiveness of buy-back financing contracts, I utilize contract and sales data from the company and structurally estimate the theoretical model. Through counterfactual analysis, I study the efficiency of various buy-back financing schemes and compare them to traditional financing methods. I find that buy-back contract agreements can improve channel efficiency significantly compared to simple contracts with no buy-back, whether or not the downstream retailer can secure financing on its own.
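To illustrate the qualitative mechanism only (this is not the structural model estimated in the thesis), a single-period newsvendor whose procurement is financed at interest rate r orders more when credit is cheaper, because the financing cost enters the critical ratio. The function below is a hypothetical sketch written for this listing.

# Hedged illustration: newsvendor order quantity when the unit wholesale cost is
# financed at rate r (zero salvage value, normally distributed demand). Cheaper
# credit raises the critical ratio and hence the order quantity, which is the
# qualitative channel through which intermediated or buy-back-backed financing
# can boost order quantities.
from scipy.stats import norm

def optimal_order_qty(price, wholesale, r, demand_mean, demand_sd):
    unit_cost = wholesale * (1.0 + r)              # repay principal plus interest per unit
    critical_ratio = (price - unit_cost) / price   # underage cost / (underage + overage)
    critical_ratio = min(max(critical_ratio, 1e-3), 0.999)
    return norm.ppf(critical_ratio, loc=demand_mean, scale=demand_sd)

# Example: dropping the interest rate from 20% to 5% increases the optimal order.
print(optimal_order_qty(price=10, wholesale=6, r=0.20, demand_mean=100, demand_sd=30))
print(optimal_order_qty(price=10, wholesale=6, r=0.05, demand_mean=100, demand_sd=30))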

Relevance: 80.00%

Abstract:

Septins are conserved GTPases that are deregulated in cancer and neurodegenerative diseases. They serve as scaffold proteins and form a diffusion barrier at the plasma membrane and at the midbody during cytokinesis. They interact with actin and organize into complexes that polymerize and form highly organized structures (rings and filaments). Their assembly dynamics and their role in the cell remain to be elucidated. Drosophila is a simple model for studying septins, since it has only 5 genes (sep1, sep2, sep4, sep5, peanut) compared with 13 genes in humans. Using an antibody against Pnut, we identified tubular structures in 30% of Drosophila S2 cells. The goal of my project is to characterize these tubes by elucidating their constituents, behaviour, and properties, in order to clarify the mechanism by which septins form highly organized structures and interact with the actin cytoskeleton. By immunofluorescence, I showed that these tubes are cytoplasmic, in mitosis or interphase, which suggests that they are not regulated by the cell cycle. To investigate the composition and dynamic properties of these tubes, I generated a cell line expressing Sep2-GFP, which localizes to the tubes, and RNAi against the five septins. Three septins are important for the formation of these tubes and rings, namely Sep1, Sep2 and Pnut. Depletion of Sep1 causes dispersion of the GFP signal into flakes, whereas depletion of Sep2 or Pnut leads to uniform dispersion of the GFP signal throughout the cell. FRAP experiments on the Sep2-GFP line reveal very slow signal recovery, indicating that these structures are very stable. I also demonstrated a relationship between actin and septins. Treatment with Latrunculin A (an inhibitor of actin polymerization) or Jasplakinolide (a stabilizer of actin filaments) leads to rapid (< 30 min) depolymerization of the tubes into rings floating in the cytoplasm, even though the tubes themselves are not recognized by F-actin labelling. Actin05C-mCherry localizes to the tubes, whereas the polymerization-deficient mutant Actin05C-R62D-mCherry loses this localization. We also observe that depletion of Cofilin and AIP1 (which destabilizes actin) leads to the same phenotype as treatment with Latrunculin A or Jasplakinolide. We can therefore conclude that a dynamic actin cytoskeleton is required for the formation and maintenance of septin tubes. Future studies will aim to better understand the organization of septins into highly organized structures and their relationship with actin. This will be useful for building the septin interaction network, which could help explain their deregulation in cancer and neurodegenerative diseases.

Relevance: 80.00%

Abstract:

Cloud edge mixing plays an important role in the life cycle and development of clouds. Entrainment of subsaturated air affects the cloud at the microscale, altering the number density and size distribution of its droplets. The resulting effect is determined by two timescales: the time required for the mixing event to complete, and the time required for the droplets to adjust to their new environment. If mixing is rapid, evaporation of droplets is uniform and said to be homogeneous in nature. In contrast, slow mixing (compared to the adjustment timescale) results in the droplets adjusting to the transient state of the mixture, producing an inhomogeneous result. Studying this process in real clouds involves the use of airborne optical instruments capable of measuring clouds at the 'single particle' level. Single particle resolution allows for direct measurement of the droplet size distribution. This is in contrast to other 'bulk' methods (e.g. hot-wire probes, lidar, radar) which measure a higher order moment of the distribution and require assumptions about the distribution shape to compute a size distribution. The sampling strategy of current optical instruments requires them to integrate over a path tens to hundreds of meters long to form a single size distribution. This is much larger than typical mixing scales (which can extend down to the order of centimeters), resulting in difficulties resolving mixing signatures. The Holodec is an optical particle instrument that uses digital holography to record discrete, local volumes of droplets. This method allows statistically significant size distributions to be calculated for centimeter-scale volumes, allowing full resolution at the scales important to the mixing process. The hologram also records the three-dimensional position of all particles within the volume, allowing the spatial structure of the cloud volume to be studied. Both of these features represent a new and unique view into the mixing problem. In this dissertation, holographic data recorded during two different field projects are analyzed to study the mixing structure of cumulus clouds. Using Holodec data, it is shown that mixing at cloud top can produce regions of clear but humid air that can subside down along the edge of the cloud as a narrow shell, or advect downshear as a 'humid halo'. This air is then entrained into the cloud at lower levels, producing mixing that appears to be very inhomogeneous. This inhomogeneous-like mixing is shown to be well correlated with regions containing elevated concentrations of large droplets. This is used to argue in favor of the hypothesis that dilution can lead to enhanced droplet growth rates. I also make observations on the microscale spatial structure of observed cloud volumes recorded by the Holodec.
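The two-timescale argument is commonly summarised in the cloud-physics literature (as general background, not as a result of this dissertation) by a Damköhler number:

\[
\mathrm{Da} = \frac{\tau_{\mathrm{mix}}}{\tau_{\mathrm{evap}}}, \qquad
\mathrm{Da} \ll 1 \;\Rightarrow\; \text{homogeneous mixing}, \qquad
\mathrm{Da} \gg 1 \;\Rightarrow\; \text{inhomogeneous mixing},
\]

where τ_mix is the turbulent mixing time of the entrained parcel and τ_evap the droplet evaporation (phase-relaxation) time.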

Relevance: 80.00%

Abstract:

This dissertation presents detailed experimental and theoretical investigations of nonlinear and nonreciprocal effects in magnetic garnet films. The dissertation thus comprises two major sections. The first section concentrates on the study of a new class of nonlinear magneto-optic thin-film materials possessing strong higher-order magnetic susceptibility for nonlinear optical applications. The focus was on enlarging the nonlinear performance of ferrite garnet films by strain generation and compositional gradients in the sputter-deposition growth of these films. Under this project several bismuth-substituted yttrium iron garnet (Bi,Y)3(Fe,Ga)5O12 (Bi:YIG) films were sputter-deposited onto gadolinium gallium garnet (Gd3Ga5O12) substrates and characterized for their nonlinear optical response. One of the important findings of this work is that lattice-mismatch strain drives the second-harmonic (SH) signal in the Bi:YIG films, in agreement with theoretical predictions, whereas micro-strain was found not to correlate significantly with the SH signal at the micro-strain levels present in these films. This study also elaborates on the role of the films' constitutive elements and their concentration gradients in the nonlinear response of the films. The ultrahigh sensitivity delivered by second-harmonic generation provides a new and exciting tool for studying magnetized surfaces and buried interfaces, making this work important from both a fundamental and an application point of view. The second part of the dissertation addresses an important technological need, namely the development of an on-chip optical isolator for use in photonic integrated circuits. It is based on two related novel effects, nonreciprocal and unidirectional optical Bloch oscillations (BOs), recently proposed and developed by Professor Miguel Levy and myself. This dissertation work has established a comprehensive theoretical background for the implementation of these effects in magneto-optic waveguide arrays. The model systems we developed consist of photonic lattices in the form of one-dimensional waveguide arrays where an optical force is introduced into the array through geometrical design, turning the beam sideways. Laterally displaced photons are periodically returned to a central guide by photonic crystal action. The effect leads to a novel oscillatory optical phenomenon that can be magnetically controlled and rendered unidirectional. An on-chip optical isolator was designed based on the unidirectionality of the magneto-optic Bloch oscillatory motion. The proposed device delivers an isolation ratio as high as 36 dB that remains above 30 dB over a 0.7 nm wavelength bandwidth at the telecommunication wavelength of 1.55 μm. Slight modifications in the isolator design allow one to achieve an even more impressive isolation ratio of ~55 dB, but at the expense of smaller bandwidth. Moreover, the device allows multifunctionality, such as optical switching with a simultaneous isolation function, well suited for photonic integrated circuits.
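For context, the lowest-order description of second-harmonic generation (a textbook relation, not a result specific to these films) is

\[
P_i(2\omega) = \varepsilon_0 \sum_{j,k} \chi^{(2)}_{ijk}\,E_j(\omega)\,E_k(\omega),
\]

so the SH signal probes the quadratic susceptibility tensor χ^{(2)}; strain and compositional gradients that alter the film symmetry can therefore modulate the SH response, consistent with the strain dependence reported above.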

Relevance: 80.00%

Abstract:

The objective of the work described in this dissertation is the development of new wireless, passive force-monitoring platforms for applications in the medical field, specifically the monitoring of lower-limb prosthetics. The developed sensors consist of stress-sensitive, magnetically soft amorphous metallic glass materials. The first technology is based on magnetoelastic resonance. Specifically, when exposed to an AC excitation field along with a constant DC bias field, the magnetoelastic material mechanically vibrates and may reach resonance if the field frequency matches the mechanical resonant frequency of the material. The presented work illustrates that an applied loading pins portions of the strip, effectively decreasing the strip length, which results in an increase in the resonance frequency. The developed technology is deployed in a prototype lower-limb prosthetic sleeve for monitoring forces experienced by the distal end of the residuum. This work also reports on the development of a magnetoharmonic force sensor composed of the same material. According to the Villari effect, an applied loading to the material results in a change in the permeability of the magnetic sensor, which is visualized as an increase in the higher-order harmonic fields of the material. Specifically, by applying a constant low-frequency AC field and sweeping the applied DC biasing field, the higher-order harmonic components of the magnetic response can be visualized. This sensor technology was also instrumented onto a lower-limb prosthetic as a proof of deployment; however, the magnetoharmonic sensor revealed complications with sensor positioning and a need to tailor the interface mechanics between the sensing material and the surface being monitored. The novelty of these two technologies is their wireless, passive nature, which allows for long-term monitoring over the lifetime of a given device. Additionally, the developed technologies are low cost. Recommendations for future work include improving the system for real-time monitoring, useful for data collection outside of a clinical setting.
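As a rough guide to the readout mechanism (the standard free-strip expression; the actual sensor frequency also depends on the magnetoelastic coupling and the bias field), the fundamental longitudinal resonance of a ribbon of length L, Young's modulus E, and density ρ is

\[
f_0 = \frac{1}{2L}\sqrt{\frac{E}{\rho}},
\]

so a loading that pins part of the strip and shortens the freely vibrating length L shifts the resonance upward, which is the effect exploited above.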