856 results for Error-location numbers
Abstract:
Philosophy has repeatedly denied cinema in order to grant it artistic status. Adorno, for example, defined an ‘uncinematic’ element in the negation of movement in modern cinema, ‘which constitutes its artistic character’. Similarly, Lyotard defended an ‘acinema’, which rather than selecting and excluding movements through editing, accepts what is ‘fortuitous, dirty, confused, unclear, poorly framed, overexposed’. In his Handbook of Inaesthetics, Badiou embraces a similar idea, by describing cinema as an ‘impure circulation’ that incorporates the other arts. Resonating with Bazin and his defence of ‘impure cinema’, that is, of cinema’s interbreeding with other arts, Badiou seems to agree with him also in identifying the uncinematic as the location of the Real. This article will investigate the particular impurities of cinema that drive it beyond the specificities of the medium and into the realm of the other arts and the reality of life itself. Privileged examples will be drawn from various moments in film history and geography, starting with the analysis of two films by Jafar Panahi: This Is Not a Film (In film nist, 2011), whose anti-cinema stance is announced in its own title; and The Mirror (Aineh, 1997), another relentless exercise in self-negation. It goes on to examine Kenji Mizoguchi’s deconstruction of cinematic acting in his exploration of the geidomono genre (films about theatre actors) in The Story of the Last Chrysanthemums (Zangiku monogatari, 1939), and culminates in the conjuring of the physical experience of death through the systematic demolition of film genres in The Act of Killing (Joshua Oppenheimer et al., 2012).
Abstract:
This study has explored the prediction errors of tropical cyclones (TCs) in the European Centre for Medium-Range Weather Forecasts (ECMWF) Ensemble Prediction System (EPS) for the Northern Hemisphere summer period for five recent years. Results for the EPS are contrasted with those for the higher-resolution deterministic forecasts. Various metrics of location and intensity errors are considered and contrasted for verification based on IBTrACS and the numerical weather prediction (NWP) analysis (NWPa). Motivated by the aim of exploring extended TC life cycles, location and intensity measures are introduced based on lower-tropospheric vorticity, which is contrasted with traditional verification metrics. Results show that location errors are almost identical when verified against IBTrACS or the NWPa. However, intensity in the form of the mean sea level pressure (MSLP) minima and 10-m wind speed maxima is significantly underpredicted relative to IBTrACS. Using the NWPa for verification results in much better consistency between the different intensity error metrics and indicates that the lower-tropospheric vorticity provides a good indication of vortex strength, with error results showing similar relationships to those based on MSLP and 10-m wind speeds for the different forecast types. The interannual variation in forecast errors is discussed in relation to changes in the forecast and NWPa system, and variations in forecast errors between different ocean basins are discussed in terms of the propagation characteristics of the TCs.
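As a rough illustration of the location-error metrics discussed above, the sketch below computes great-circle distances between forecast and best-track TC centre positions. It is a minimal example assuming centres are given as latitude/longitude pairs; the track values and the haversine formula are illustrative stand-ins, not the study's exact verification procedure.

```python
import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance (km) between two points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlam = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlam / 2) ** 2
    return 2 * radius_km * np.arcsin(np.sqrt(a))

# Hypothetical forecast vs best-track (IBTrACS-style) TC centre positions.
forecast_track = [(18.2, 135.4), (19.0, 133.9), (20.1, 132.2)]
observed_track = [(18.0, 135.0), (19.3, 133.2), (20.8, 131.5)]

location_errors = [great_circle_km(fl, fo, ol, oo)
                   for (fl, fo), (ol, oo) in zip(forecast_track, observed_track)]
print([round(e, 1) for e in location_errors])   # location error (km) per lead time
```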
Abstract:
Although medieval rentals have been extensively studied, few scholars have used them to analyse variations in the rents paid on individual properties within a town. It has been claimed that medieval rents did not reflect economic values or market forces, but were set according to social and political rather than economic criteria, and remained ossified at customary levels. This paper uses hedonic regression methods to test whether property rents in medieval Gloucester were influenced by classic economic factors such as the location and use of a property. It investigates both rents and local rates (landgavel), and explores the relationship between the two. It also examines spatial autocorrelation. It finds significant relationships between urban rents and property characteristics that are similar to those found in modern studies. The findings are consistent with the view that, in Gloucester at least, medieval rents were strongly influenced by classical economic factors working through a competitive urban property market.
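A minimal sketch of the hedonic regression approach described above, assuming a small table of properties with a rent, a distance-to-market proxy for location, and a commercial-use dummy. All variable names and figures are hypothetical, and the paper's actual specification will differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical property-level data: rent (pence), distance to market (m), use.
df = pd.DataFrame({
    "rent":       [24, 36, 18, 60, 30, 12, 48, 20],
    "dist_mkt":   [400, 150, 650, 50, 300, 800, 120, 500],
    "commercial": [0, 1, 0, 1, 1, 0, 1, 0],
})

# Hedonic specification: log(rent) regressed on location and use characteristics.
X = sm.add_constant(df[["dist_mkt", "commercial"]])
model = sm.OLS(np.log(df["rent"]), X).fit()
print(model.params)   # a negative dist_mkt slope and a positive commercial dummy
                      # would point to classic economic influences on rents
```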
Abstract:
Although the sunspot-number series have existed since the mid-19th century, they are still the subject of intense debate, with the largest uncertainty being related to the "calibration" of the visual acuity of individual observers in the past. Daisy-chain regression methods are applied to inter-calibrate the observers, which may lead to significant bias and error accumulation. Here we present a novel method to calibrate the visual acuity of the key observers to the reference data set of Royal Greenwich Observatory sunspot groups for the period 1900-1976, using the statistics of the active-day fraction. For each observer we independently evaluate their observational thresholds [S_S], defined such that the observer is assumed to miss all of the groups with an area smaller than S_S and report all the groups larger than S_S. Next, using a Monte-Carlo method, we construct, from the reference data set, a correction matrix for each observer. The correction matrices are significantly non-linear and cannot be approximated by a linear regression or proportionality. We emphasize that corrections based on a linear proportionality between annually averaged data lead to serious biases and distortions of the data. The correction matrices are applied to the original sunspot group records for each day, and finally the composite corrected series is produced for the period since 1748. The corrected series displays secular minima around 1800 (Dalton minimum) and 1900 (Gleissberg minimum), as well as the Modern grand maximum of activity in the second half of the 20th century. The uniqueness of the grand maximum is confirmed for the last 250 years. It is shown that the adoption of a linear relationship between the data of Wolf and Wolfer results in grossly inflated group numbers in the 18th and 19th centuries in some reconstructions.
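The observational-threshold idea can be sketched as follows: an observer with acuity threshold S_S is assumed to miss every group with area below S_S, and Monte-Carlo sampling of days from a reference catalogue then tabulates how the observer's reported group count maps onto the true count. This is only a schematic reading of the method; the synthetic catalogue, threshold value, and matrix normalisation below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference catalogue: each "day" is a list of group areas (msh).
reference_days = [rng.lognormal(mean=5.0, sigma=1.0, size=rng.integers(0, 12))
                  for _ in range(5000)]

def observed_count(groups, s_s):
    """Observer with acuity threshold s_s reports only groups larger than s_s."""
    return int(np.sum(np.asarray(groups) > s_s))

S_S = 120.0        # illustrative acuity threshold for one observer
max_groups = 12
# correction_matrix[k, n] ~ P(true daily count = n | observer reports k groups)
counts = np.zeros((max_groups + 1, max_groups + 1))
for groups in reference_days:
    n_true = min(len(groups), max_groups)
    k_obs = min(observed_count(groups, S_S), max_groups)
    counts[k_obs, n_true] += 1
row_sums = counts.sum(axis=1, keepdims=True)
correction_matrix = np.divide(counts, row_sums,
                              out=np.zeros_like(counts), where=row_sums > 0)
```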
Abstract:
A smoother introduced earlier by van Leeuwen and Evensen is applied to a problem in which real observations are used in an area with strongly nonlinear dynamics. The derivation is new, but it resembles an earlier derivation by van Leeuwen and Evensen. Again a Bayesian view is taken in which the prior probability density of the model and the probability density of the observations are combined to form a posterior density. The mean and the covariance of this density give the variance-minimizing model evolution and its errors. The assumption is made that the prior probability density is a Gaussian, leading to a linear update equation. Critical evaluation shows when the assumption is justified. This also sheds light on why Kalman filters, in which the same approximation is made, work for nonlinear models. By reference to the derivation, the impact of model and observational biases on the equations is discussed, and it is shown that Bayes's formulation can still be used. A practical advantage of the ensemble smoother is that no adjoint equations have to be integrated and that error estimates are easily obtained. The present application shows that for process studies a smoother will give superior results compared to a filter, not only owing to the smooth transitions at observation points, but also because the origin of features can be followed back in time. Also its preference over a strong-constraint method is highlighted. Furthermore, it is argued that the proposed smoother is more efficient than gradient descent methods or than the representer method when error estimates are taken into account.
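The Gaussian-prior assumption mentioned above yields a linear, variance-minimizing update of the ensemble mean. The sketch below shows that update in its simplest (filter-like) form with synthetic arrays; in a smoother the same algebra applies with the state vector extended over the whole time window. All dimensions and values are illustrative.

```python
import numpy as np

def ensemble_update(X, y, H, R):
    """Linear (Gaussian-prior) update of an ensemble X (n_state x n_members)
    given observations y, observation operator H and obs-error covariance R."""
    x_mean = X.mean(axis=1, keepdims=True)
    A = X - x_mean                          # ensemble anomalies
    m = X.shape[1]
    P_HT = A @ (H @ A).T / (m - 1)          # P H^T estimated from the ensemble
    S = H @ P_HT + R                        # innovation covariance
    K = P_HT @ np.linalg.inv(S)             # Kalman-type gain
    return x_mean + K @ (y.reshape(-1, 1) - H @ x_mean)

# Tiny illustrative case: 3 state variables, 20 members, 1 observation.
rng = np.random.default_rng(1)
X = rng.normal(size=(3, 20))
H = np.array([[1.0, 0.0, 0.0]])
y = np.array([0.5])
R = np.array([[0.1]])
print(ensemble_update(X, y, H, R).ravel())
```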
Abstract:
With the development of convection-permitting numerical weather prediction the efficient use of high resolution observations in data assimilation is becoming increasingly important. The operational assimilation of these observations, such as Doppler radar radial winds, is now common, though to avoid violating the assumption of uncorrelated observation errors the observation density is severely reduced. To improve the quantity of observations used and the impact that they have on the forecast will require the introduction of the full, potentially correlated, error statistics. In this work, observation error statistics are calculated for the Doppler radar radial winds that are assimilated into the Met Office high resolution UK model using a diagnostic that makes use of statistical averages of observation-minus-background and observation-minus-analysis residuals. This is the first in-depth study using the diagnostic to estimate both horizontal and along-beam correlated observation errors. By considering the new results obtained, it is found that the Doppler radar radial wind error standard deviations are similar to those used operationally and increase as the observation height increases. Surprisingly, the estimated observation error correlation length scales are longer than the operational thinning distance. They are dependent on both the height of the observation and on the distance of the observation away from the radar. Further tests show that the long correlations cannot be attributed to the use of superobservations or the background error covariance matrix used in the assimilation. The large horizontal correlation length scales are, however, in part, a result of using a simplified observation operator.
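The residual-based diagnostic referred to above is commonly attributed to Desroziers et al., in which the observation-error covariance is approximated by the statistical average of observation-minus-analysis times observation-minus-background residuals. A minimal sketch under that assumption, with synthetic residuals standing in for the radial-wind departures:

```python
import numpy as np

def desroziers_R(d_ob, d_oa):
    """Estimate the observation-error covariance from observation-minus-background
    (d_ob) and observation-minus-analysis (d_oa) residuals, each of shape
    (n_samples, n_obs): R_est ~ E[d_oa d_ob^T]."""
    d_ob = d_ob - d_ob.mean(axis=0)
    d_oa = d_oa - d_oa.mean(axis=0)
    n = d_ob.shape[0]
    R_est = d_oa.T @ d_ob / n
    return 0.5 * (R_est + R_est.T)          # symmetrise the sample estimate

# Hypothetical residual samples for 5 radial-wind observations along a beam.
rng = np.random.default_rng(2)
d_ob = rng.normal(size=(1000, 5))
d_oa = 0.6 * d_ob + 0.2 * rng.normal(size=(1000, 5))
R_hat = desroziers_R(d_ob, d_oa)
corr = R_hat / np.sqrt(np.outer(np.diag(R_hat), np.diag(R_hat)))
print(np.round(corr, 2))                    # estimated error correlations
```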
Abstract:
We analyse the ability of CMIP3 and CMIP5 coupled ocean–atmosphere general circulation models (CGCMs) to simulate the tropical Pacific mean state and El Niño-Southern Oscillation (ENSO). The CMIP5 multi-model ensemble displays an encouraging 30 % reduction of the pervasive cold bias in the western Pacific, but no quantum leap in ENSO performance compared to CMIP3. CMIP3 and CMIP5 can thus be considered as one large ensemble (CMIP3 + CMIP5) for multi-model ENSO analysis. The too large diversity in CMIP3 ENSO amplitude is however reduced by a factor of two in CMIP5 and the ENSO life cycle (location of surface temperature anomalies, seasonal phase locking) is modestly improved. Other fundamental ENSO characteristics such as central Pacific precipitation anomalies however remain poorly represented. The sea surface temperature (SST)-latent heat flux feedback is slightly improved in the CMIP5 ensemble but the wind-SST feedback is still underestimated by 20–50 % and the shortwave-SST feedbacks remain underestimated by a factor of two. The improvement in ENSO amplitudes might therefore result from error compensations. The ability of CMIP models to simulate the SST-shortwave feedback, a major source of erroneous ENSO in CGCMs, is further detailed. In observations, this feedback is strongly nonlinear because the real atmosphere switches from subsident (positive feedback) to convective (negative feedback) regimes under the effect of seasonal and interannual variations. Only one-third of CMIP3 + CMIP5 models reproduce this regime shift, with the other models remaining locked in one of the two regimes. The modelled shortwave feedback nonlinearity increases with ENSO amplitude and the amplitude of this feedback in the spring relates strongly to the models' ability to simulate ENSO phase locking. In a final stage, a subset of metrics is proposed in order to synthesize the ability of each CMIP3 and CMIP5 model to simulate ENSO's main characteristics and key atmospheric feedbacks.
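Feedback strengths such as the wind-SST or shortwave-SST couplings quoted above are typically estimated as regression slopes of flux (or stress) anomalies onto SST anomalies. The sketch below illustrates the idea, with a crude regime split by the sign of the SST anomaly to mimic the nonlinearity discussed; the synthetic data and the split criterion are assumptions, not the paper's method.

```python
import numpy as np

def feedback_coefficient(sst_anom, flux_anom):
    """Least-squares slope of flux anomalies onto SST anomalies (W m-2 K-1)."""
    slope, _ = np.polyfit(sst_anom, flux_anom, deg=1)
    return slope

# Hypothetical monthly anomalies for a Niño-3-style region.
rng = np.random.default_rng(3)
sst = rng.normal(scale=0.8, size=240)
# Nonlinear "observed-like" shortwave response: strong damping only when warm.
sw = np.where(sst > 0, -15.0 * sst, 5.0 * sst) + rng.normal(scale=3.0, size=240)

print("all months :", round(feedback_coefficient(sst, sw), 1))
print("warm regime:", round(feedback_coefficient(sst[sst > 0], sw[sst > 0]), 1))
print("cold regime:", round(feedback_coefficient(sst[sst <= 0], sw[sst <= 0]), 1))
```

A single slope fitted to all months hides the regime dependence; splitting the sample is one simple way to expose the sign change described in the abstract.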
Abstract:
This paper assesses the impact of the location and configuration of Battery Energy Storage Systems (BESS) on Low-Voltage (LV) feeders. BESS are now being deployed on LV networks by Distribution Network Operators (DNOs) as an alternative to conventional reinforcement (e.g. upgrading cables and transformers) in response to increased electricity demand from new technologies such as electric vehicles. By storing energy during periods of low demand and then releasing that energy at times of high demand, the peak demand of a given LV substation on the grid can be reduced, thereby mitigating or at least delaying the need for replacement and upgrade. However, existing research into this application of BESS tends to evaluate the aggregated impact of such systems at the substation level and does not systematically consider the impact of the location and configuration of BESS on the voltage profiles, losses and utilisation within a given feeder. In this paper, four configurations of BESS are considered: single-phase, unlinked three-phase, linked three-phase without storage for phase-balancing only, and linked three-phase with storage. These four configurations are then assessed based on models of two real LV networks. In each case, the impact of the BESS is systematically evaluated at every node in the LV network using Matlab linked with OpenDSS. The location and configuration of a BESS is shown to be critical when seeking the best overall network impact or when considering specific impacts on voltage, losses, or utilisation separately. Furthermore, the paper demonstrates that, on unbalanced networks, phase-balancing without energy storage can deliver much of the benefit provided by systems with energy storage.
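The peak-shaving role of a BESS described above can be sketched with a greedy dispatch rule: discharge when substation demand exceeds a threshold, recharge when it falls below. The demand profile, battery size, and threshold below are illustrative only; the paper's network simulations (Matlab linked with OpenDSS) are far more detailed.

```python
import numpy as np

def peak_shave(demand_kw, capacity_kwh, power_kw, dt_h=0.5, threshold_kw=None):
    """Greedy peak-shaving dispatch: discharge above a threshold, recharge below it.
    Returns the net demand seen by the substation."""
    if threshold_kw is None:
        threshold_kw = 0.8 * max(demand_kw)
    soc = 0.5 * capacity_kwh                    # start half full
    net = []
    for d in demand_kw:
        if d > threshold_kw:                    # discharge to shave the peak
            p = min(power_kw, d - threshold_kw, soc / dt_h)
            soc -= p * dt_h
            net.append(d - p)
        else:                                   # recharge towards full
            p = min(power_kw, (capacity_kwh - soc) / dt_h, threshold_kw - d)
            soc += p * dt_h
            net.append(d + p)
    return np.array(net)

# Illustrative half-hourly feeder demand with an evening peak near 90 kW.
hours = np.arange(0, 24, 0.5)
demand = 40 + 50 * np.exp(-((hours - 18) / 1.5) ** 2)
print(round(max(demand), 1), "->", round(max(peak_shave(demand, 50, 25)), 1))
```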
Abstract:
In recent years an increasing number of papers have employed meta-analysis to integrate effect sizes of researchers’ own series of studies within a single paper (“internal meta-analysis”). Although this approach has the obvious advantage of obtaining narrower confidence intervals, we show that it could inadvertently inflate false-positive rates if researchers are motivated to use internal meta-analysis in order to obtain a significant overall effect. Specifically, if one decides whether to stop or continue a further replication experiment depending on the significance of the results in an internal meta-analysis, false-positive rates would increase beyond the nominal level. We conducted a set of Monte-Carlo simulations to demonstrate our argument, and provided a literature review to gauge awareness and prevalence of this issue. Furthermore, we provide several recommendations for the use of internal meta-analysis in judging statistical significance.
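The optional-stopping mechanism described above is easy to reproduce in a Monte-Carlo simulation: under a true null effect, studies are added one at a time and the researcher stops as soon as a fixed-effect internal meta-analysis of the accumulated studies is significant. The sample sizes, maximum number of studies, and fixed-effect pooling below are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def meta_z(effects, ses):
    """Fixed-effect meta-analytic z statistic from study effects and their SEs."""
    w = 1.0 / np.asarray(ses) ** 2
    pooled = np.sum(w * effects) / np.sum(w)
    return pooled / np.sqrt(1.0 / np.sum(w))

def one_researcher(n_per_study=50, max_studies=5, alpha=0.05):
    """Run studies under a true null; stop as soon as the internal
    meta-analysis of all studies so far is significant."""
    effects, ses = [], []
    for _ in range(max_studies):
        a = rng.normal(size=n_per_study)        # control group
        b = rng.normal(size=n_per_study)        # treatment group, true effect = 0
        d = b.mean() - a.mean()
        se = np.sqrt(a.var(ddof=1) / n_per_study + b.var(ddof=1) / n_per_study)
        effects.append(d)
        ses.append(se)
        if abs(meta_z(effects, ses)) > stats.norm.ppf(1 - alpha / 2):
            return True                         # "significant" overall effect
    return False

fp_rate = np.mean([one_researcher() for _ in range(2000)])
print(f"false-positive rate with optional stopping: {fp_rate:.3f}")
```

With these settings the overall false-positive rate typically lands well above the nominal 5 %, which is exactly the inflation the abstract warns about.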
Abstract:
More than 70 years ago it was recognised that ionospheric F2-layer critical frequencies [foF2] had a strong relationship to sunspot number. Using historic datasets from the Slough and Washington ionosondes, we evaluate the best statistical fits of foF2 to sunspot numbers (at each Universal Time [UT] separately) in order to search for drifts and abrupt changes in the fit residuals over Solar Cycles 17-21. This test is carried out for the original composite of the Wolf/Zürich/International sunspot number [R], the new “backbone” group sunspot number [RBB] and the proposed “corrected sunspot number” [RC]. Polynomial fits are made both with and without allowance for the white-light facular area, which has been reported as being associated with cycle-to-cycle changes in the sunspot number - foF2 relationship. Over the interval studied here, R, RBB, and RC largely differ in their allowance for the “Waldmeier discontinuity” around 1945 (the correction factor for which for R, RBB and RC is, respectively, zero, effectively over 20 %, and explicitly 11.6 %). It is shown that for Solar Cycles 18-21, all three sunspot data sequences perform well, but that the fit residuals are lowest and most uniform for RBB. We here use foF2 for those UTs for which R, RBB, and RC all give correlations exceeding 0.99 for intervals both before and after the Waldmeier discontinuity. The error introduced by the Waldmeier discontinuity causes R to underestimate the fitted values based on the foF2 data for 1932-1945 but RBB overestimates them by almost the same factor, implying that the correction for the Waldmeier discontinuity inherent in RBB is too large by a factor of two. Fit residuals are smallest and most uniform for RC and the ionospheric data support the optimum discontinuity multiplicative correction factor derived from the independent Royal Greenwich Observatory (RGO) sunspot group data for the same interval.
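The fitting procedure described above can be sketched as a low-order polynomial fit of foF2 against the sunspot number at a fixed UT, with fit residuals compared before and after a candidate discontinuity year. The synthetic series below, including its artificial ~18 % step, is purely illustrative of how such a calibration step shows up in the residuals.

```python
import numpy as np

def fit_residuals(sunspot_number, foF2, degree=2):
    """Polynomial fit of foF2 against sunspot number; return the fit residuals."""
    coeffs = np.polyfit(sunspot_number, foF2, degree)
    return foF2 - np.polyval(coeffs, sunspot_number)

# Synthetic example with an artificial ~18 % step in the reported sunspot number.
years = np.arange(1932, 1960)
rng = np.random.default_rng(5)
R_true = 80 + 70 * np.sin(2 * np.pi * (years - 1933) / 11)      # stand-in cycle
foF2 = 6.0 + 0.03 * R_true + rng.normal(scale=0.1, size=R_true.size)
R_reported = np.where(years >= 1946, 1.18 * R_true, R_true)

res = fit_residuals(R_reported, foF2)
before, after = res[years < 1946], res[years >= 1946]
print(round(before.mean(), 3), round(after.mean(), 3))   # opposite-signed residual
                                                         # means reveal the step
```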
Abstract:
The co-polar correlation coefficient (ρhv) has many applications, including hydrometeor classification, ground clutter and melting layer identification, interpretation of ice microphysics and the retrieval of rain drop size distributions (DSDs). However, we currently lack the quantitative error estimates that are necessary if these applications are to be fully exploited. Previous error estimates of ρhv rely on knowledge of the unknown "true" ρhv and implicitly assume a Gaussian probability distribution function of ρhv samples. We show that frequency distributions of ρhv estimates are in fact highly negatively skewed. A new variable, L = -log10(1 - ρhv), is defined, which does have Gaussian error statistics and a standard deviation that depends only on the number of independent radar pulses. This is verified using observations of spherical drizzle drops, allowing, for the first time, the construction of rigorous confidence intervals in estimates of ρhv. In addition, we demonstrate how the imperfect co-location of the horizontal and vertical polarisation sample volumes may be accounted for. The possibility of using L to estimate the dispersion parameter (µ) in the gamma drop size distribution is investigated. We find that including drop oscillations is essential for this application, otherwise there could be biases in retrieved µ of up to ~8. Preliminary results in rainfall are presented. In a convective rain case study, our estimates show µ to be substantially larger than 0 (an exponential DSD). In this particular rain event, rain rate would be overestimated by up to 50% if a simple exponential DSD is assumed.
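A minimal sketch of the transformation defined above, L = -log10(1 - ρhv), together with a Gaussian interval in L mapped back to an (asymmetric) interval in ρhv. The standard deviation of L is left as a free parameter here, since the paper ties it to the number of independent radar pulses; the numbers are illustrative.

```python
import numpy as np

def rho_to_L(rho_hv):
    """Transform the co-polar correlation coefficient to L = -log10(1 - rho_hv)."""
    return -np.log10(1.0 - rho_hv)

def L_to_rho(L):
    return 1.0 - 10.0 ** (-L)

def rho_confidence_interval(rho_hat, sigma_L, n_sigma=1.96):
    """Map a Gaussian interval in L back to an (asymmetric) interval in rho_hv."""
    L_hat = rho_to_L(rho_hat)
    return L_to_rho(L_hat - n_sigma * sigma_L), L_to_rho(L_hat + n_sigma * sigma_L)

# Example: an estimate rho_hv = 0.995 with an assumed sigma_L of 0.05.
lo, hi = rho_confidence_interval(0.995, sigma_L=0.05)
print(f"95% interval for rho_hv: [{lo:.4f}, {hi:.4f}]")   # asymmetric about 0.995
```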
Abstract:
In the event of a volcanic eruption the decision to close airspace is based on forecast ash maps, produced using volcanic ash transport and dispersion models. In this paper we quantitatively evaluate the spatial skill of volcanic ash simulations using satellite retrievals of ash from the Eyjafjallajökull eruption during the period from 7 to 16 May 2010. We find that at the start of this period, 7–10 May, the model (FLEXible PARTicle) has excellent skill and can predict the spatial distribution of the satellite-retrieved ash to within 0.5° × 0.5° latitude/longitude. However, on 10 May there is a decrease in the spatial accuracy of the model to 2.5° × 2.5° latitude/longitude, and between 11 and 12 May the simulated ash location errors grow rapidly. On 11 May ash is located close to a bifurcation point in the atmosphere, resulting in a rapid divergence in the modeled and satellite ash locations. In general, the model skill reduces as the residence time of ash increases. However, the error growth is not always steady. Rapid increases in error growth are linked to key points in the ash trajectories. Ensemble modeling using perturbed meteorological data would help to represent this uncertainty, and assimilation of satellite ash data would help to reduce uncertainty in volcanic ash forecasts.
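Spatial skill of the kind quoted above (agreement to within a given latitude/longitude scale) can be illustrated with a neighbourhood verification score such as the Fractions Skill Score, computed on binary ash masks at increasing window sizes. This is one common choice of spatial metric, not necessarily the one used in the paper; the grid, masks, and displacement below are synthetic.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(model_mask, obs_mask, window):
    """Fractions Skill Score of two binary fields for a square window (grid cells)."""
    fm = uniform_filter(model_mask.astype(float), size=window, mode="constant")
    fo = uniform_filter(obs_mask.astype(float), size=window, mode="constant")
    mse = np.mean((fm - fo) ** 2)
    mse_ref = np.mean(fm ** 2) + np.mean(fo ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

# Hypothetical 0.5-degree ash masks: modelled cloud displaced by three grid cells.
obs = np.zeros((40, 60))
obs[18:24, 25:35] = 1
model = np.roll(obs, shift=3, axis=1)
for cells in (1, 5, 9):
    print(f"FSS at {0.5 * cells:.1f} deg: {fss(model, obs, cells):.2f}")
# skill improves as the neighbourhood scale grows beyond the displacement error
```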
Abstract:
Atmosphere-only and ocean-only variational data assimilation (DA) schemes are able to use window lengths that are optimal for the error growth rate, non-linearity and observation density of the respective systems. Typical window lengths are 6-12 hours for the atmosphere and 2-10 days for the ocean. However, in the implementation of coupled DA schemes it has been necessary to match the window length of the ocean to that of the atmosphere, which may potentially sacrifice the accuracy of the ocean analysis in order to provide a more balanced coupled state. This paper investigates how extending the window length in the presence of model error affects both the analysis of the coupled state and the initialized forecast when using coupled DA with differing degrees of coupling. Results are illustrated using an idealized single column model of the coupled atmosphere-ocean system. It is found that the analysis error from an uncoupled DA scheme can be smaller than that from a coupled analysis at the initial time, due to faster error growth in the coupled system. However, this does not necessarily lead to a more accurate forecast, due to imbalances in the coupled state. Instead, coupled DA is better able to update the initial state to reduce the impact of the model error on the accuracy of the forecast. The effect of model error is potentially most detrimental in the weakly coupled formulation, due to the inconsistency between the coupled model used in the outer loop and the uncoupled models used in the inner loop.