139 results for Order of Convergence


Relevance:

100.00%

Publisher:

Abstract:

The debate associated with the qualifications of business school faculty has raged since the 1959 release of the Gordon–Howell and Pierson reports, which encouraged business schools in the USA to enhance their legitimacy by increasing their faculties’ doctoral qualifications and scholarly rigor. Today, the legitimacy of specific faculty qualifications remains one of the most discussed topics in management education, attracting the interest of administrators, faculty, and accreditation agencies. Based on new institutional theory and the institutional logics perspective, this paper examines convergence and innovation in business schools through an analysis of faculty hiring criteria. The qualifications examined are academic degree, scholarly publications, teaching experience, and professional experience. Three groups of schools are examined based on type of university, position within a media ranking system, and accreditation by the Association to Advance Collegiate Schools of Business. Data are gathered using a content analysis of 441 faculty postings from business schools based in the USA over two time periods. Contrary to claims of global convergence, we find most qualifications still vary by group, even in the mature US market. Moreover, innovative hiring is more likely to be found in non-elite schools.

Relevance:

100.00%

Publisher:

Abstract:

We know that from mid-childhood onwards most new words are learned implicitly via reading; however, most word learning studies have taught novel items explicitly. We examined incidental word learning during reading by focusing on the well-documented finding that words which are acquired early in life are processed more quickly than those acquired later. Novel words were embedded in meaningful sentences and were presented to adult readers early (day 1) or later (day 2) during a five-day exposure phase. At test adults read the novel words in semantically neutral sentences. Participants’ eye movements were monitored throughout exposure and test. Adults also completed a surprise memory test in which they had to match each novel word with its definition. Results showed a decrease in reading times for all novel words over exposure, and significantly longer total reading times at test for early than late novel words. Early-presented novel words were also remembered better in the offline test. Our results show that order of presentation influences processing time early in the course of acquiring a new word, consistent with partial and incremental growth in knowledge occurring as a function of an individual’s experience with each word.

Relevance:

100.00%

Publisher:

Abstract:

We present a new set of subjective age-of-acquisition (AoA) ratings for 299 words (158 nouns, 141 verbs) in 25 languages from five language families (Afro-Asiatic: Semitic languages; Altaic: one Turkic language; Indo-European: Baltic, Celtic, Germanic, Hellenic, Slavic, and Romance languages; Niger-Congo: one Bantu language; Uralic: Finnic and Ugric languages). Adult native speakers reported the age at which they had learned each word. We present a comparison of the AoA ratings across all languages by contrasting them in pairs; this comparison shows a consistency in the orders of ratings across the 25 languages. The data were then analyzed (1) to ascertain how the demographic characteristics of the participants influenced AoA estimations; (2) to assess differences caused by the exact form of the target question (when did you learn vs. when do children learn this word); (3) to compare the ratings obtained in our study to those of previous studies; and (4) to assess the validity of our study by comparison with quasi-objective AoA norms derived from the MacArthur–Bates Communicative Development Inventories (MB-CDI). All 299 words were judged as being acquired early (mostly before the age of 6 years). AoA ratings were associated with the raters' social or language status, but not with the raters' age or education: parents reported words as being learned earlier, and bilinguals reported learning them later. Estimations of the age at which children learn the words yielded significantly lower AoA ratings. Finally, comparisons with previous AoA and MB-CDI norms support the validity of the present estimations. Our AoA ratings are available for research or other purposes.
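
One way to express the pairwise consistency mentioned above is a rank correlation between the per-language rating vectors. A minimal sketch with synthetic stand-in ratings and placeholder language names, not the norms collected in this study:

```python
# Hypothetical sketch: pairwise rank-order consistency of AoA ratings
# across languages, using synthetic ratings and placeholder language names.
from itertools import combinations

import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_words = 299
base = rng.uniform(1.0, 6.0, n_words)          # shared "true" AoA per word
languages = ["lang_a", "lang_b", "lang_c"]     # placeholder names
ratings = pd.DataFrame(
    {lang: base + 0.5 * rng.standard_normal(n_words) for lang in languages}
)

# Spearman rank correlation for every pair of languages.
for lang_x, lang_y in combinations(ratings.columns, 2):
    rho, _ = spearmanr(ratings[lang_x], ratings[lang_y])
    print(f"{lang_x} vs {lang_y}: Spearman rho = {rho:.2f}")
```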

Relevance:

100.00%

Publisher:

Abstract:

Different optimization methods can be employed to optimize a numerical estimate for the match between an instantiated object model and an image. In order to take advantage of gradient-based optimization methods, perspective inversion must be used in this context. We show that convergence can be very fast by extrapolating to maximum goodness-of-fit with Newton's method. This approach is related to methods which either maximize a similar goodness-of-fit measure without use of gradient information, or else minimize distances between projected model lines and image features. Newton's method combines the accuracy of the former approach with the speed of convergence of the latter.
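
As a rough illustration of the extrapolation step, a Newton iteration drives the parameter estimate towards a maximum of a goodness-of-fit function using its gradient and Hessian. The sketch below uses a toy concave quadratic fit surface, not the paper's model-to-image match measure:

```python
# Minimal sketch of Newton's method applied to maximize a goodness-of-fit
# function; the fit surface and parameters here are toy placeholders.
import numpy as np


def newton_maximize(grad, hess, theta0, tol=1e-8, max_iter=50):
    """Iterate theta <- theta - H^{-1} g to a stationary point."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        g = grad(theta)
        H = hess(theta)
        step = np.linalg.solve(H, g)
        theta = theta - step
        if np.linalg.norm(step) < tol:
            break
    return theta


# Toy fit surface f(t) = -(t0 - 1)^2 - 2 (t1 - 2)^2 with maximum at (1, 2).
grad = lambda t: np.array([-2.0 * (t[0] - 1.0), -4.0 * (t[1] - 2.0)])
hess = lambda t: np.array([[-2.0, 0.0], [0.0, -4.0]])
print(newton_maximize(grad, hess, theta0=[0.0, 0.0]))  # ~ [1. 2.]
```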

Relevance:

100.00%

Publisher:

Abstract:

In this article, we use the no-response test idea, introduced in Luke and Potthast (2003) and Potthast (preprint) for the inverse obstacle problem, to identify the interface of discontinuity of the coefficient γ in the operator ∇·γ(x)∇ + c(x), with piecewise regular γ and bounded function c(x). We use infinitely many Cauchy data as measurements and give a reconstructive method to localize the interface. We base this multiwave version of the no-response test on two different proofs. The first contains a pointwise estimate as used by the singular sources method. The second is built on an energy (or integral) estimate, which is the basis of the probe method. As a consequence, the probe and singular sources methods are equivalent regarding their convergence, and the no-response test can be seen as a unified framework for these methods. As a further contribution, we provide a formula to reconstruct the values of the jump of γ(x), x ∈ ∂D, at the boundary. A second consequence of this formula is that the blow-up rate of the indicator functions of the probe and singular sources methods at the interface is given by the order of the singularity of the fundamental solution.

Relevance:

100.00%

Publisher:

Abstract:

This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model; the illumination model used determines the kernel of the equation under consideration. Monte Carlo methods are widely used to solve the rendering equation for creating photorealistic images. In this work we consider Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to the stratified Monte Carlo integration method so that the rendering equation can be solved in parallel. The domain of integration of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles; this partitioning allows the rendering equation to be solved in parallel. The Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum to a desired truncation (systematic) error, which fixes the number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach; at each iteration we evaluate multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained for the stratified Monte Carlo method. This domain partitioning allows easy parallel realization and improves the convergence of the Monte Carlo method. High-performance and Grid computing implementations of the corresponding Monte Carlo scheme are discussed.
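
For illustration, a stratified hemispherical Monte Carlo estimate can be sketched as follows. The strata here form a simple (cos θ, φ) grid rather than the orthogonal spherical triangles used in the paper, and the integrand is a toy cosine term whose hemispherical integral is known to be π:

```python
# Illustrative sketch: stratified Monte Carlo estimate of a hemispherical
# integral, stratifying in (cos(theta), phi). Not the paper's partitioning
# into orthogonal spherical triangles.
import numpy as np

rng = np.random.default_rng(0)


def integrand(theta, phi):
    # Toy integrand: cos(theta); its integral over the hemisphere is pi.
    return np.cos(theta)


def stratified_estimate(n_theta=8, n_phi=16, samples_per_stratum=4):
    total = 0.0
    for i in range(n_theta):
        for j in range(n_phi):
            # Jittered samples inside the (cos(theta), phi) stratum.
            u = (i + rng.random(samples_per_stratum)) / n_theta
            v = (j + rng.random(samples_per_stratum)) / n_phi
            theta = np.arccos(1.0 - u)   # uniform in cos(theta), i.e. in solid angle
            phi = 2.0 * np.pi * v
            total += integrand(theta, phi).mean()
    # Every stratum carries the same solid angle, 2*pi / (n_theta * n_phi).
    return total * 2.0 * np.pi / (n_theta * n_phi)


print(stratified_estimate())  # close to pi
```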

Relevance:

100.00%

Publisher:

Abstract:

Simultaneous observations of cloud microphysical properties were obtained by in-situ aircraft measurements and ground-based radar/lidar. Widespread mid-level stratus cloud was present below a temperature inversion (~5 °C magnitude) at 3.6 km altitude. Localised convection (peak updraft 1.5 m s−1) was observed 20 km west of the radar station and was associated with convergence at 2.5 km altitude. The convection was unable to penetrate the inversion capping the mid-level stratus. The mid-level stratus cloud was vertically thin (~400 m), horizontally extensive (covering hundreds of km) and persisted for more than 24 h. The cloud consisted of supercooled water droplets and small concentrations of large (~1 mm) stellar/plate-like ice which slowly precipitated out. This ice was nucleated at temperatures greater than −12.2 °C and less than −10.0 °C (cloud top and cloud base temperatures, respectively). No ice seeding from above the cloud layer was observed. This ice was formed by primary nucleation, either through the entrainment of efficient ice nuclei from above/below cloud, or by the slow stochastic activation of immersion-freezing ice nuclei contained within the supercooled drops. Above cloud top, significant concentrations of sub-micron aerosol were observed, consisting of a mixture of sulphate and carbonaceous material, a potential source of ice nuclei. Particle number concentrations in this size range (above 0.1 µm) were ~25 cm−3. Ice crystal concentrations in the cloud were constant at around 0.2 L−1. It is estimated that entrainment of aerosol particles into cloud cannot replenish the loss of ice nuclei from the cloud layer via precipitation. Precipitation from the mid-level stratus evaporated before reaching the surface, whereas rates of up to 1 mm h−1 were observed below the convective feature. There is strong evidence for the Hallett–Mossop (HM) process of secondary ice particle production leading to the formation of the precipitation observed: (1) ice concentrations in the convective feature were more than an order of magnitude greater than the concentration of primary ice in the overlying stratus; (2) large concentrations of small pristine columns were observed at the ~−5 °C level together with liquid water droplets and a few rimed ice particles; (3) columns were larger and increasingly rimed at colder temperatures. Calculated ice splinter production rates are consistent with observed concentrations if the condition that only droplets greater than 24 μm can generate secondary ice splinters is relaxed. This case demonstrates the importance of understanding the formation of ice at slightly supercooled temperatures, as it can lead to secondary ice production and the formation of precipitation in clouds which might not otherwise be considered significant precipitation sources.

Relevance:

100.00%

Publisher:

Abstract:

The UPSCALE (UK on PRACE: weather-resolving Simulations of Climate for globAL Environmental risk) project, using PRACE (Partnership for Advanced Computing in Europe) resources, constructed and ran an ensemble of atmosphere-only global climate model simulations using the Met Office Unified Model GA3 configuration. Each simulation is 27 years in length, for both the present climate and an end-of-century future climate, at resolutions of N96 (130 km), N216 (60 km) and N512 (25 km), in order to study the impact of model resolution on high-impact climate features such as tropical cyclones. Increased model resolution is found to improve the simulated frequency of explicitly tracked tropical cyclones, and correlations of interannual variability in the North Atlantic and North West Pacific lie between 0.6 and 0.75. Improvements in the deficit of genesis in the eastern North Atlantic as resolution increases appear to be related to the representation of African Easterly Waves and the African Easterly Jet. However, the intensity of the modelled tropical cyclones, as measured by 10 m wind speed, remains weak, and there is no indication of convergence over this range of resolutions. In the future climate ensemble, there is a reduction of 50% in the frequency of Southern Hemisphere tropical cyclones, while in the Northern Hemisphere there is a reduction in the North Atlantic and a shift in the Pacific, with peak intensities becoming more common in the Central Pacific. There is also a change in tropical cyclone intensities, with the future climate having fewer weak storms and proportionally more strong storms.

Relevance:

90.00%

Publisher:

Abstract:

The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
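
A minimal sketch of the exact and "truncated" variants, assuming a toy exponential-fit problem rather than a variational data assimilation system: the inner normal equations J^T J δ = −J^T r are solved either exactly or by a fixed, small number of conjugate-gradient iterations.

```python
# Hedged sketch of (truncated) Gauss-Newton for a generic nonlinear least
# squares problem; the exponential model is a toy stand-in.
import numpy as np

rng = np.random.default_rng(1)


def residual(theta, t, y):
    a, b = theta
    return a * np.exp(b * t) - y


def jacobian(theta, t):
    a, b = theta
    return np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])


def gauss_newton(theta0, t, y, outer_iters=20, inner_iters=None):
    """If inner_iters is small, the inner linear least squares problem is
    solved only approximately (a 'truncated' Gauss-Newton iteration)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(outer_iters):
        r = residual(theta, t, y)
        J = jacobian(theta, t)
        A, b = J.T @ J, -J.T @ r
        if inner_iters is None:
            delta = np.linalg.solve(A, b)      # exact inner solve
        else:
            delta = np.zeros_like(theta)       # truncated inner solve (CG)
            res = b.copy()
            p = res.copy()
            for _ in range(inner_iters):
                alpha = (res @ res) / (p @ A @ p)
                delta = delta + alpha * p
                new_res = res - alpha * (A @ p)
                beta = (new_res @ new_res) / (res @ res)
                p = new_res + beta * p
                res = new_res
        theta = theta + delta
    return theta


t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(-1.5 * t) + 0.01 * rng.standard_normal(t.size)
print(gauss_newton([1.0, -1.0], t, y))                  # exact inner solves
print(gauss_newton([1.0, -1.0], t, y, inner_iters=1))   # truncated variant
```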

Relevance:

90.00%

Publisher:

Abstract:

Under anthropogenic climate change, it is possible that the increased radiative forcing and associated changes in mean climate may affect the "dynamical equilibrium" of the climate system, leading to a change in the relative dominance of different modes of natural variability, the characteristics of their patterns, or their behavior in the time domain. Here we use multi-century integrations of version three of the Hadley Centre atmosphere model coupled to a mixed-layer ocean to examine potential changes in atmosphere-surface ocean modes of variability. After first evaluating the simulated modes of Northern Hemisphere winter surface temperature and geopotential height against observations, we examine their behavior under an idealized equilibrium doubling of atmospheric CO2. We find no significant changes in the order of dominance, the spatial patterns or the associated time series of the modes. Having established that the dynamical equilibrium is preserved in the model on doubling of CO2, we go on to examine the temperature pattern of mean climate change in terms of the modes of variability, the motivation being that the pattern of change might be explicable in terms of changes in the amount of time the system resides in a particular mode. In addition, if the two are closely related, we might be able to assess the relative credibility of different spatial patterns of climate change from different models (or model versions) by assessing their representation of variability. Significant shifts do appear to occur in the mean position of residence when examining a truncated set of the leading-order modes. However, on examining the complete spectrum of modes, it is found that the mean climate change pattern is close to orthogonal to all of the modes, and the large shifts are a manifestation of this orthogonality. The results suggest that care should be exercised in using a truncated set of variability EOFs to evaluate climate change signals.
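
The orthogonality check described above amounts to projecting the mean-change pattern onto the variability EOFs and summing the squared projection coefficients. A hypothetical sketch with synthetic fields, not the model output used in the study:

```python
# Hypothetical sketch: project a mean climate-change pattern onto EOFs of
# internal variability and measure how much the leading modes capture.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: 500 winter-mean fields on 1000 grid points.
fields = rng.standard_normal((500, 1000))
anomalies = fields - fields.mean(axis=0)

# EOFs via SVD of the anomaly matrix; rows of vt are orthonormal spatial modes.
_, _, vt = np.linalg.svd(anomalies, full_matrices=False)

# Hypothetical mean-change pattern on the same grid, normalised to unit length.
change = rng.standard_normal(1000)
change_hat = change / np.linalg.norm(change)

# Fraction of the change pattern captured by the leading k modes
# (1.0 would mean the pattern lies entirely within those modes).
for k in (5, 20, 100):
    coeffs = vt[:k] @ change_hat
    print(k, float(np.sum(coeffs ** 2)))
```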

Relevance:

90.00%

Publisher:

Abstract:

The performance of a 2D numerical model of flood hydraulics is tested for a major event in Carlisle, UK, in 2005. This event is associated with a unique data set, with GPS-surveyed wrack lines and flood extent surveyed 3 weeks after the flood. The Simple Finite Volume (SFV) model is used to solve the 2D Saint-Venant equations over an unstructured mesh of 30,000 elements representing the channel and floodplain, allowing the detailed hydraulics of flow around bridge piers and other influential features to be represented. The SFV model is also used to corroborate flows recorded for the event at two gauging stations. Calibration of Manning's n is performed with a two-stage strategy: channel values are determined by calibration of the gauging-station models, and floodplain values by optimising the fit between model results and the observed water levels and flood extent for the 2005 event. The RMS error of the calibrated model compared with surveyed water levels is ~±0.4 m, the same order of magnitude as the estimated error in the survey data. The study demonstrates the ability of unstructured-mesh hydraulic models to represent important hydraulic processes across a range of scales, with potential applications to flood risk management.
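
The floodplain stage of the calibration can be sketched as a search over candidate Manning's n values that minimises RMS error against the surveyed levels. Everything below (the surrogate model, survey points and candidate range) is hypothetical and stands in for the actual SFV runs:

```python
# Hypothetical sketch of the floodplain roughness calibration: grid search
# over Manning's n minimising RMS error against surveyed water levels.
# run_sfv_model() is a toy surrogate, not the SFV solver.
import numpy as np

rng = np.random.default_rng(3)
survey_x = np.linspace(0.0, 5000.0, 200)   # chainage of surveyed wrack points (m)


def run_sfv_model(floodplain_n, x):
    # Toy surrogate: level falls gently downstream and rises with roughness.
    return 12.0 - 0.0004 * x + 20.0 * floodplain_n


def rmse(modelled, observed):
    return float(np.sqrt(np.mean((np.asarray(modelled) - np.asarray(observed)) ** 2)))


# Synthetic "survey" generated with a true n of 0.06 plus 0.2 m noise.
surveyed = run_sfv_model(0.06, survey_x) + 0.2 * rng.standard_normal(survey_x.size)

candidates = np.round(np.arange(0.02, 0.13, 0.01), 3)
scores = {float(n): rmse(run_sfv_model(n, survey_x), surveyed) for n in candidates}
best_n = min(scores, key=scores.get)
print(best_n, round(scores[best_n], 3))    # typically recovers ~0.06
```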

Relevance:

90.00%

Publisher:

Abstract:

SMPS and DMS500 analysers were used to measure particulate size distributions in the exhaust of a fully annular aero gas turbine engine at two operating conditions, to compare the instruments and analyse sources of discrepancy. A number of different dilution ratios were used for the comparative analysis, and a Dekati hot diluter operating at a temperature of 623 K was also used to remove volatile PM prior to measurement. Additional work focused on observing the effect of varying the sample line temperature. Explanations are offered for most of the trends observed, although a new, repeatable event identified in the range from 417 K to 423 K, where there was a three-order-of-magnitude increase in the nucleation mode of the sample, requires further study.

Relevance:

90.00%

Publisher:

Abstract:

Disequilibria between Pb-210 and Ra-226 can be used to trace magma degassing, because the intermediate nuclides, particularly Rn-222, are volatile. Products of the 1980-1986 eruptions of Mount St. Helens have been analysed for (Pb-210/Ra-226). Both excesses and deficits of Pb-210 are encountered, suggesting rapid gas transfer. The time scale of diffuse, non-eruptive gas escape prior to 1980, as documented by Pb-210 deficits, is on the order of a decade using the model developed by Gauthier and Condomines (Earth Planet. Sci. Lett. 172 (1999) 111-126) for a non-renewed magma chamber and efficient Rn removal. The time required to build up Pb-210 excess is much shorter (months), as can be observed from steady increases of (Pb-210/Ra-226) with time during 1980-1982. The formation of Pb-210 excess requires both rapid gas transport through the magma and periodic blocking of gas escape routes. Superposed on this time trend is the natural variability of (Pb-210/Ra-226) in a single eruption caused by tapping magma from various depths. The two time scales of gas transport, needed to create Pb-210 deficits and Pb-210 excesses respectively, cannot be reconciled in a single event. Rather, Pb-210 deficits are associated with pre-eruptive diffuse degassing, while Pb-210 excesses document the more vigorous degassing associated with eruption and recharge of the system.

Relevance:

90.00%

Publisher:

Abstract:

Lime treatment of hydrocarbon-contaminated soils offers the potential to stabilize and solidify these materials, with a consequent reduction in the risks associated with the leachate emanating from them. This can aid the disposal of contaminated soils or enable their on-site treatment. In this study, the addition of hydrated lime and quicklime significantly reduced the leaching of total petroleum hydrocarbons (TPH) from soils polluted with a 50:50 petrol/diesel mixture. Treatment with quicklime was slightly more effective, but hydrated lime may be better in the field because of its ease of handling. It is proposed that this reduction occurs as a consequence of pozzolanic reactions retaining the hydrocarbons within the soil matrix. There was some evidence that this may be a temporary effect, as leaching increased between 7 and 21 days after treatment, but the TPH concentrations in the leachate of treated soils were still one order of magnitude below those of the control soil, offering significant protection to groundwater. The reduction in leaching following treatment was observed in both aliphatic and aromatic fractions, but the latter were more affected because of their higher solubility. The results are discussed in the context of risk assessment, and recommendations for future research are made.