971 results for Geometric Function Theory
Abstract:
2000 Mathematics Subject Classification: 94A12, 94A20, 30D20, 41A05.
Abstract:
The theory and experimental applications of optical Airy beams have been under active development in recent years. Airy beams possess very special properties: they are non-diffracting and propagate along parabolic trajectories. Striking applications of optical Airy beams include optical micro-manipulation, implemented as the transport of small particles along a parabolic trajectory, Airy-Bessel linear light bullets, electron acceleration by Airy beams, and plasmonic energy routing. Detailed analyses of the mathematical aspects and the physical interpretation of electromagnetic Airy beams have treated the wave as a function of spatial coordinates only, with a parabolic relationship between the transverse and longitudinal coordinates; the time dependence is assumed to be harmonic. Only a few papers consider a more general temporal dependence in which such a relationship exists between the temporal and spatial variables. This relationship is derived mostly by applying the Fourier transform to expressions obtained for harmonic time dependence, or by Fourier synthesis using a specific modulated spectrum near some central frequency. Spatio-temporal Airy pulses in the form of contour integrals have been analysed near the caustic, and numerical solution of the nonlinear paraxial equation in the time domain shows soliton shedding from the Airy pulse in a Kerr medium. In this paper, explicitly time-dependent solutions of the electromagnetic problem in the form of spatio-temporal pulses are derived in the paraxial approximation through the Green's function for the paraxial equation. It is shown that a Gaussian and an Airy pulse can be obtained by applying the Green's function to an appropriate source current. We emphasize that processes in the time domain are directional, which leads to unexpected conclusions, especially for the paraxial approximation.
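For orientation, the scalar paraxial equation referenced here, together with its free-space Green's function (the Fresnel kernel, written for one transverse dimension), takes the standard monochromatic form

```latex
2ik\,\frac{\partial \psi}{\partial z} + \frac{\partial^{2} \psi}{\partial x^{2}} = 0,
\qquad
G(x, z; x') = \sqrt{\frac{k}{2\pi i z}}\,
\exp\!\left[\frac{ik\,(x - x')^{2}}{2z}\right],
```

so that $\psi(x,z) = \int G(x,z;x')\,\psi(x',0)\,dx'$. The time-domain Green's function constructed in the paper is the corresponding generalization to explicitly time-dependent fields.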
Abstract:
Leadership is one of the most examined factors in relation to understanding employee wellbeing and performance. While there are disparate approaches to studying leadership, they share a common assumption that perceptions of a leader's behavior determine reactions to the leader. The concept of leadership perception is poorly understood in most theoretical approaches. To address this, we propose that there are many benefits from examining leadership perceptions as an attitude towards the leader. In this review, we show how research examining a number of aspects of attitudes (content, structure and function) can advance understanding of leadership perceptions and how these affect work-related outcomes. Such a perspective provides a more multi-faceted understanding of leadership perceptions than previously envisaged and this can provide a more detailed understanding of how such perceptions affect outcomes. In addition, we examine some of the main theoretical and methodological implications of viewing leadership perceptions as attitudes to the wider leadership area. The cross-fertilization of research from the attitudes literature to understanding leadership perceptions provides new insights into leadership processes and potential avenues for further research.
Abstract:
In this paper we summarize our recent work on the information-theoretic analysis of regenerative channels. We discuss how the design and the transfer-function properties of the regenerator affect the noise statistics and enable Shannon capacities higher than those of the corresponding linear channels (in the absence of regeneration).
Abstract:
We present a review of the latest developments in one-dimensional (1D) optical wave turbulence (OWT). Based on an original experimental setup that allows for the implementation of 1D OWT, we are able to show that an inverse cascade occurs through the spontaneous evolution of the nonlinear field up to the point when modulational instability leads to soliton formation. After solitons are formed, further interaction of the solitons among themselves and with incoherent waves leads to a final condensate state dominated by a single strong soliton. Motivated by the observations, we develop a theoretical description, showing that the inverse cascade develops through six-wave interaction, and that this is the basic mechanism of nonlinear wave coupling for 1D OWT. We describe theory, numerics and experimental observations while trying to incorporate all the different aspects into a consistent context. The experimental system is described by two coupled nonlinear equations, which we explore within two wave limits allowing for the expression of the evolution of the complex amplitude in a single dynamical equation. The long-wave limit corresponds to waves with wave numbers smaller than the electrical coherence length of the liquid crystal, and the opposite limit, when wave numbers are larger. We show that both of these systems are of a dual cascade type, analogous to two-dimensional (2D) turbulence, which can be described by wave turbulence (WT) theory, and conclude that the cascades are induced by a six-wave resonant interaction process. WT theory predicts several stationary solutions (non-equilibrium and thermodynamic) to both the long- and short-wave systems, and we investigate the necessary conditions required for their realization. 
Interestingly, the long-wave system is close to the integrable 1D nonlinear Schrödinger equation (NLSE) (which contains exact nonlinear soliton solutions), and as a result during the inverse cascade, nonlinearity of the system at low wave numbers becomes strong. Subsequently, due to the focusing nature of the nonlinearity, this leads to modulational instability (MI) of the condensate and the formation of solitons. Finally, with the aid of the probability density function (PDF) description of WT theory, we explain the coexistence and mutual interactions between solitons and the weakly nonlinear random wave background in the form of a wave turbulence life cycle (WTLC).
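For reference, the focusing 1D NLSE mentioned above, in one common normalisation, and its one-soliton solution read

```latex
i\,\frac{\partial\psi}{\partial t} + \frac{\partial^{2}\psi}{\partial x^{2}} + 2\,|\psi|^{2}\psi = 0,
\qquad
\psi(x,t) = \eta\,\operatorname{sech}\!\big[\eta\,(x - 2vt)\big]\,
e^{\,i\,\left(vx + (\eta^{2} - v^{2})\,t\right)},
```

with amplitude $\eta$ and velocity $2v$; coherent structures of this kind are what emerge from the condensate via modulational instability.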
Abstract:
In this paper we apply a new approach for determining a priority vector for a pairwise comparison matrix, which plays an important role in Decision Theory. The divergence between the pairwise comparison matrix A and the consistent matrix B defined by the priority vector is measured with the help of the Kullback-Leibler relative entropy function. Minimizing this divergence leads to a convex programming problem for a completely filled matrix and to a fixed-point problem for an incompletely filled matrix. The priority vector minimizing the divergence also has the property that the difference between the sums of the elements of the matrix A and of the matrix B is exactly n times the minimum of the divergence function, where n is the size of the problem. The minimum of the divergence function is therefore, for two reasons, a suitable measure of the inconsistency of the matrix A.
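As a rough illustration of the complete-matrix case, the sketch below recovers a priority vector by numerical minimisation. It assumes one plausible form of the generalised Kullback-Leibler divergence between positive matrices; the function names and the exact objective are ours, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def kl_divergence(A, w):
    """Generalised Kullback-Leibler divergence between a positive pairwise
    comparison matrix A and the consistent matrix B with entries w_i / w_j."""
    B = np.outer(w, 1.0 / w)
    return float(np.sum(A * np.log(A / B) - A + B))

def priority_vector(A):
    """Priority vector minimising the divergence; optimisation runs over
    log-weights so positivity of w is automatic."""
    n = A.shape[0]
    res = minimize(lambda x: kl_divergence(A, np.exp(x)), np.zeros(n))
    w = np.exp(res.x)
    return w / w.sum()   # normalise (the objective is scale-invariant in w)
```

For a perfectly consistent matrix the divergence minimum is zero and the underlying weights are recovered exactly.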
Abstract:
Extreme stock price movements are of great concern to both investors and the entire economy. For investors, a single negative return, or a combination of several smaller returns, can possibly wipe out so much capital that the firm or portfolio becomes illiquid or insolvent. If enough investors experience this loss, it could shock the entire economy. An example of such a case is the stock market crash of 1987. Furthermore, there has been a lot of recent interest regarding the increasing volatility of stock prices. This study presents an analysis of extreme stock price movements. The data utilized were the daily returns for the Standard and Poor's 500 index from January 3, 1978 to May 31, 2001. Research questions were analyzed using the statistical models provided by extreme value theory. One of the difficulties in examining stock price data is that there is no consensus regarding the correct shape of the distribution function generating the data. An advantage of extreme value theory is that no detailed knowledge of this distribution function is required to apply the asymptotic theory; we focus on the tail of the distribution. Extreme value theory allows us to estimate a tail index, which we use to derive bounds on returns that are exceeded only with very low probability. Such information is useful in evaluating the volatility of stock prices. There are three possible limit laws for the maximum: Gumbel (thin-tailed), Fréchet (thick-tailed) or Weibull (no tail). Results indicated that extreme returns during the time period studied follow a Fréchet distribution. Thus, this study finds that extreme value analysis is a valuable tool for examining stock price movements and can be more efficient than the usual variance in measuring risk.
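As a hedged illustration of tail-index estimation, a minimal Hill-estimator sketch could look like the following. The estimator choice and function name are illustrative; the study's exact methodology may differ.

```python
import numpy as np

def hill_tail_index(x, k):
    """Hill estimator of the tail index alpha from the k largest observations.
    x should contain positive magnitudes, e.g. absolute extreme losses."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]   # descending order statistics
    gamma = np.mean(np.log(x[:k] / x[k]))           # estimate of 1/alpha
    return 1.0 / gamma
```

A large estimated alpha indicates a thin tail, while a small alpha (heavy Fréchet-type tail) implies that extreme returns occur far more often than a normal model would suggest.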
Abstract:
Current reform initiatives recommend that geometry instruction include the study of three-dimensional geometric objects and provide students with opportunities to use spatial skills in problem-solving tasks. Geometer's Sketchpad (GSP) is a dynamic and interactive computer program that enables the user to investigate and explore geometric concepts and manipulate geometric structures. Research using GSP as an instructional tool has focused primarily on teaching and learning two-dimensional geometry. This study explored the effect of a GSP based instructional environment on students' geometric thinking and three-dimensional spatial ability as they used GSP to learn three-dimensional geometry. For 10 weeks, 18 tenth-grade students from an urban school district used GSP to construct and analyze dynamic, two-dimensional representations of three-dimensional objects in a classroom environment that encouraged exploration, discussion, conjecture, and verification. The data were collected primarily from participant observations and clinical interviews and analyzed using qualitative methods of analysis. In addition, pretest and posttest measures of three-dimensional spatial ability and van Hiele level of geometric thinking were obtained. Spatial ability measures were analyzed using standard t-test analysis. The data from this study indicate that GSP is a viable tool to teach students about three-dimensional geometric objects. A comparison of students' pretest and posttest van Hiele levels showed an improvement in geometric thinking, especially for students on lower levels of the van Hiele theory. Evidence at the p < .05 level indicated that students' spatial ability improved significantly. Specifically, the GSP dynamic, visual environment supported students' visualization and reasoning processes as students attempted to solve challenging tasks about three-dimensional geometric objects.
The GSP instructional activities also provided students with an experiential base and an intuitive understanding about three-dimensional objects from which more formal work in geometry could be pursued. This study demonstrates that by designing appropriate GSP based instructional environments, it is possible to help students improve their spatial skills, develop more coherent and accurate intuitions about three-dimensional geometric objects, and progress through the levels of geometric thinking proposed by van Hiele.
Abstract:
The field of chemical kinetics is an exciting and active field. The prevailing theories make a number of simplifying assumptions that do not always hold in actual cases. Another current problem concerns the development of efficient numerical algorithms for solving the master equations that arise in the description of complex reactions. The objective of the present work is to furnish a completely general and exact theory of reaction rates, in a form reminiscent of transition state theory, valid for all fluid phases, and also to develop a computer program that can solve complex reactions by finding the concentrations of all participating substances as a function of time. To this end, the full quantum scattering theory is used to derive the exact rate law, and the resulting cumulative reaction probability is put into several equivalent forms that take into account all relativistic effects where applicable, including one that is strongly reminiscent of transition state theory but includes corrections from scattering theory. Two programs have then been developed and tested on simple applications: one for solving complex reactions and one for solving first-order linear kinetic master equations.
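As a minimal sketch of what solving a first-order linear kinetic master equation involves, consider the system dc/dt = K c, whose exact solution is a matrix exponential. The reaction network and rate constants below are hypothetical, not taken from the work described.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical first-order network A -> B -> C with rate constants k1, k2;
# the columns of K sum to zero, so total concentration is conserved.
k1, k2 = 2.0, 1.0
K = np.array([[-k1, 0.0, 0.0],
              [ k1, -k2, 0.0],
              [ 0.0,  k2, 0.0]])

def concentrations(c0, t):
    """Exact solution c(t) = exp(K t) c0 of the linear master equation."""
    return expm(K * t) @ np.asarray(c0, dtype=float)
```

For this network the solution reproduces the familiar analytic result c_A(t) = c_A(0) e^(-k1 t), and all concentration eventually accumulates in the terminal species C.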
Abstract:
This theory-based paper examines the definition of Executive Functioning (EF) skills, their importance in the early childhood classroom and how to aid in their natural development. The Word of Wisdom meditation technique is considered as a viable alternative to increase the natural development of EF skills in early childhood.
Abstract:
Date of Acceptance: 02/03/2015
Abstract:
Experiments at Jefferson Lab have been conducted to extract the nucleon spin-dependent structure functions over a wide kinematic range. Higher moments of these quantities provide tests of QCD sum rules and predictions of chiral perturbation theory ($\chi$PT). While precise measurements of $g_{1}^n$, $g_{2}^n$, and $g_1^p$ have been performed extensively, data on $g_2^p$ remain scarce. Discrepancies have been found between existing data related to $g_2$ and theoretical predictions. Results on the proton at large $Q^2$ show a significant deviation from the Burkhardt-Cottingham sum rule, while results for the neutron generally follow this sum rule. The next-to-leading-order $\chi$PT calculations exhibit a discrepancy with data on the longitudinal-transverse polarizability $\delta_{LT}^n$. Further measurements of the proton spin structure function $g_2^p$ are needed to understand these discrepancies.
Experiment E08-027 (g2p) was conducted at Jefferson Lab in experimental Hall A in 2012. Inclusive measurements were performed with a polarized electron beam and a polarized ammonia target to obtain the proton spin-dependent structure function $g_2^p$ in the low $Q^2$ region ($0.02 < Q^2 < 0.2$ GeV$^2$) for the first time. The results can be used to test the Burkhardt-Cottingham sum rule, and also allow us to extract the longitudinal-transverse spin polarizability of the proton, which will provide a benchmark test of $\chi$PT calculations. This thesis presents and discusses the very preliminary results of the transverse asymmetry and the spin-dependent structure functions $g_1^p$ and $g_2^p$ from the data analysis of the g2p experiment.
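For reference, the Burkhardt-Cottingham sum rule tested by these data states that, at fixed $Q^2$,

```latex
\int_{0}^{1} g_{2}(x, Q^{2})\,dx = 0 ,
```

so any statistically significant nonzero value of this integral extracted from the measured $g_2^p$ would indicate a violation of the sum rule.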
Abstract:
This paper presents an economic model of the effects of identity and social norms on consumption patterns. Drawing on qualitative studies in psychology and sociology, I propose a utility function that features two components: economic (functional) and identity elements. This setup is extended to analyze a market comprising a continuum of consumers, whose identity distribution along a spectrum of binary identities is described by a Beta distribution. I also introduce the notion of salience in the context of identity and consumption decisions. The key result of the model suggests that fundamental economic parameters, such as price elasticity and market demand, can be altered by identity elements. In addition, it predicts that firms in perfectly competitive markets may associate their products with certain types of identities in order to reduce product substitutability and attain price-setting power.
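A toy sketch (our own illustrative formulation, not the paper's model) of how an identity term can shift market demand: consumers located at position theta on a binary-identity spectrum, with theta drawn from a Beta(a, b) distribution, buy one unit when functional value plus a salience-weighted identity payoff covers the price.

```python
from scipy.stats import beta

# Hypothetical parameters: v = functional value, s = identity salience,
# (a, b) = shape of the Beta identity distribution. All names are ours.
def market_demand(p, v=1.0, s=0.5, a=2.0, b=2.0):
    """Fraction of consumers who buy at price p: those with v + s*theta >= p."""
    cutoff = (p - v) / s
    if cutoff <= 0.0:
        return 1.0            # price below functional value: everyone buys
    if cutoff >= 1.0:
        return 0.0            # price exceeds maximum attainable payoff
    return 1.0 - beta.cdf(cutoff, a, b)
```

In this sketch the price elasticity of demand depends on s and on the shape of the identity distribution, illustrating the paper's point that identity elements can alter fundamental demand parameters.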
Abstract:
As the world population grows past seven billion people and global challenges persist, including resource availability, biodiversity loss, climate change and human well-being, a new science is required that can address the integrated nature of these challenges and the multiple scales on which they are manifest. Sustainability science has emerged to fill this role. In the fifteen years since it was first called for in the pages of Science, it has rapidly matured; however, its place in the history of science and the way it is practiced today must be continually evaluated. In Part I, two chapters address this theoretical and practical grounding. Part II transitions to the applied practice of sustainability science in addressing the urban heat island (UHI) challenge, wherein urban areas are warmer than their surrounding rural environs. The UHI has become increasingly important within the study of earth sciences, given the increased focus on climate change and the fact that the majority of humans now live in urban areas.
In Chapter 2 a novel contribution to the historical context of sustainability is argued. Sustainability, as a concept characterizing the relationship between humans and nature, emerged in the mid-to-late 20th century as a response to the same findings used to characterize the Anthropocene. Building on the human-nature relationships that came before it, evidence is provided suggesting that sustainability was enabled by technology and a reorientation of world-view, and that it is unique in its global boundary, its systematic approach, and its ambition for both well-being and the continued availability of resources and Earth-system function. Sustainability is, further, an ambition with wide appeal, making it one of the first normative concepts of the Anthropocene.
Despite its widespread emergence and adoption, sustainability science continues to suffer from definitional ambiguity within the academy. In Chapter 3, a review of efforts to provide direction and structure to the science reveals a continuum of approaches anchored at either end by differing visions of how the science interfaces with practice (solutions). At one end, basic science of societally defined problems informs decisions about possible solutions and their application. At the other end, applied research directly affects the options available to decision makers. While this dichotomy is clear in the literature, survey data suggest that it is less apparent in the minds of practitioners.
In Chapter 4, the UHI is first addressed at the synoptic, mesoscale. Urban climate is the most immediate manifestation of the warming global climate for the majority of people on earth. Nearly half of those people live in small to medium sized cities, an understudied scale in urban climate research. Widespread characterization would be useful to decision makers in planning and design. Using a multi-method approach, the mesoscale UHI in the study region is characterized and the secular trend over the last sixty years evaluated. Under isolated ideal conditions the findings indicate a UHI of 5.3 ± 0.97 °C to be present in the study area, the magnitude of which is growing over time.
Although urban heat islands (UHI) are well studied, there remain no panaceas for local-scale mitigation and adaptation methods; therefore, continued attention to characterization of the phenomenon in urban centers of different scales around the globe is required. In Chapter 5, a local-scale analysis of the canopy-layer and surface UHI in a medium-sized city in North Carolina, USA is conducted using multiple methods, including stationary urban sensors, mobile transects and remote sensing. Focusing on the ideal conditions for UHI development during an anticyclonic summer heat event, the study observes a range of UHI intensities depending on the method of observation: 8.7 °C from the stationary urban sensors; 6.9 °C from mobile transects; and 2.2 °C from remote sensing. Additional attention is paid to the diurnal dynamics of the UHI and its correlation with vegetation indices, dewpoint and albedo. Evapotranspiration is shown to drive the dynamics in the study region.
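A minimal sketch of the canopy-layer intensity calculation implied by the stationary-sensor method, using one common definition (urban-minus-rural air temperature over simultaneous observations); the chapter's exact procedure may differ.

```python
import numpy as np

def uhi_intensity(urban_temps, rural_temps):
    """Canopy-layer UHI intensity: mean and sample standard deviation of the
    urban-minus-rural air-temperature difference over simultaneous readings."""
    diff = np.asarray(urban_temps, dtype=float) - np.asarray(rural_temps, dtype=float)
    return diff.mean(), diff.std(ddof=1)
```

Applied to paired hourly series from an urban and a rural reference station, this yields an intensity estimate with an uncertainty, comparable in form to the 5.3 ± 0.97 °C figure reported above.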
Finally, recognizing that a bridge must be established between the physical science community studying the urban heat island (UHI) effect and the planning community and decision makers implementing urban form and development policies, Chapter 6 evaluates multiple urban-form characterization methods. The methods evaluated include local climate zones (LCZ), national land cover database (NLCD) classes and urban cluster analysis (UCA), assessed for their utility in describing the distribution of the UHI based on three standard observation types: 1) fixed urban temperature sensors, 2) mobile transects and 3) remote sensing. Bivariate, regression and ANOVA tests are used to conduct the analyses. Findings indicate that the NLCD classes are best correlated with the UHI intensity and distribution in the study area. Further, while the UCA method is not useful directly, the variables included in the method are predictive based on regression analysis, so the potential for better model design exists. Land-cover variables including albedo, impervious surface fraction and pervious surface fraction are found to dominate the distribution of the UHI in the study area regardless of observation method.
Chapter 7 provides a summary of findings and offers a brief analysis of their implications both for the scientific discourse generally and for the study area specifically. In general, the work undertaken does not achieve the full ambition of sustainability science; additional work is required to translate findings into practice and to evaluate their adoption more fully. The implications for planning and development in the local region are addressed in the context of a major light-rail infrastructure project, including several systems-level considerations such as human health and development. Finally, several avenues for future work are outlined. Within the theoretical development of sustainability science, these pathways include more robust evaluations of theoretical and actual practice. Within the UHI context, they include development of an integrated urban-form characterization model, application of the study methodology in other geographic areas and at different scales, and use of novel experimental methods including distributed sensor networks and citizen science.