108 results for "real world mathematics"
Abstract:
The translation of an ensemble of model runs into a probability distribution is a common task in model-based prediction. Common methods for such ensemble interpretations proceed as if verification and ensemble were draws from the same underlying distribution, an assumption that is not viable for most, if any, real-world ensembles. An alternative is to consider an ensemble merely as a source of information rather than as the possible scenarios of reality. This approach, which looks for maps between ensembles and probability distributions, is investigated and extended. Common methods are revisited, and an improvement to standard kernel dressing, called ‘affine kernel dressing’ (AKD), is introduced. AKD assumes an affine mapping between ensemble and verification that typically acts not on individual ensemble members but on the ensemble as a whole; the parameters of this mapping are determined in parallel with the other dressing parameters, including a weight assigned to the unconditioned (climatological) distribution. These amendments to standard kernel dressing, albeit simple, can improve performance significantly and are shown to be appropriate for both overdispersive and underdispersive ensembles, unlike standard kernel dressing, which exacerbates overdispersion. Studies are presented using operational numerical weather predictions for two locations and data from the Lorenz63 system, demonstrating both effectiveness given operational constraints and statistical significance given a large sample.
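To make the structure of such an ensemble-to-distribution map concrete, here is a hedged sketch of the general form a kernel-dressed forecast density with a climatology weight can take; the symbols (affine coefficients $r_1$, $r_2$, $a$, bandwidth $\sigma$, climatology weight $w$) are illustrative and do not necessarily match the paper's notation or its exact parameterisation:

$$ p(y \mid x_1,\dots,x_m) \;=\; (1-w)\,\frac{1}{m}\sum_{i=1}^{m}\frac{1}{\sigma}\,K\!\left(\frac{y-z_i}{\sigma}\right) \;+\; w\,p_{\mathrm{clim}}(y), \qquad z_i = r_1 + r_2\,\bar{x} + a\,x_i, $$

where $K$ is a standard Gaussian kernel, $\bar{x}$ is the ensemble mean (so the affine map acts on the ensemble as a whole rather than member by member), $p_{\mathrm{clim}}$ is the climatological density, and the parameters are fitted jointly on training data, for example by minimising a proper score.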
Abstract:
The current state of the art and direction of research in computer vision aimed at automating the analysis of CCTV images is presented. This includes low-level identification of objects within the field of view of cameras, following those objects over time and between cameras, and the interpretation of those objects’ appearance and movements with respect to models of behaviour (and therefore the intentions inferred from them). The potential ethical problems (and some potential opportunities) such developments may pose if and when deployed in the real world are presented, and suggestions are made as to the new regulations that will be needed if such systems are not to further enhance the power of the surveillers against the surveilled.
Abstract:
The objective of this book is to present the quantitative techniques that are commonly employed in empirical finance research, together with real-world, state-of-the-art research examples. Each chapter is written by international experts in their fields. The unique approach is to describe a question or issue in finance and then to demonstrate the methodologies that may be used to solve it. All of the techniques described are used to address real problems rather than being presented for their own sake, and the areas of application have been carefully selected so that a broad range of methodological approaches can be covered. This book is aimed primarily at doctoral researchers and academics who are engaged in conducting original empirical research in finance. In addition, the book will be useful to researchers in the financial markets and to advanced Masters-level students who are writing dissertations.
Abstract:
Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in parallel data mining algorithms and, in particular, in the k-means algorithm for cluster analysis. In the straightforward parallel formulation of the k-means algorithm, data and computation loads are uniformly distributed over the processing nodes. This approach has excellent load balancing characteristics that may suggest it could scale up to large and extreme-scale parallel computing systems. However, at each iteration step the algorithm requires a global reduction operation, which hinders the scalability of the approach. This work studies a different parallel formulation of the algorithm in which the requirement of global communication is removed, while maintaining the same deterministic nature of the centralised algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
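For orientation, the following is a minimal, self-contained sketch (not the authors' code; the node layout, variable names and use of NumPy are illustrative assumptions) of the straightforward data-parallel k-means formulation described above: data are split uniformly over simulated nodes, each node accumulates partial centroid sums locally, and every iteration ends with a global reduction of those partial sums and counts, the communication step whose removal is the subject of the work.

```python
# Illustrative sketch of the straightforward data-parallel k-means formulation.
# "Nodes" are simulated as array chunks; in a real distributed run the marked
# reduction would be an MPI_Allreduce over all processing nodes.
import numpy as np

def parallel_kmeans(data, k, n_nodes=4, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    chunks = np.array_split(data, n_nodes)            # uniform data distribution
    for _ in range(n_iter):
        partial_sums = np.zeros((n_nodes, k, data.shape[1]))
        partial_counts = np.zeros((n_nodes, k))
        for node, chunk in enumerate(chunks):          # node-local computation
            dists = ((chunk[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
            labels = dists.argmin(axis=1)
            for j in range(k):
                members = chunk[labels == j]
                partial_sums[node, j] = members.sum(axis=0)
                partial_counts[node, j] = len(members)
        # Global reduction: combine every node's partial sums and counts;
        # this per-iteration communication step is the scalability bottleneck
        # discussed in the abstract.
        total_sums = partial_sums.sum(axis=0)
        total_counts = partial_counts.sum(axis=0)
        nonempty = total_counts > 0
        centroids[nonempty] = total_sums[nonempty] / total_counts[nonempty][:, None]
    return centroids

if __name__ == "__main__":
    points = np.random.default_rng(1).normal(size=(1000, 2))
    print(parallel_kmeans(points, k=3))
```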
Abstract:
In response to a substantial weakening of the Atlantic Meridional Overturning Circulation (AMOC) in a coupled ocean–atmosphere general circulation model experiment, significant changes in interannual variability are found over the tropical Atlantic, characterized by an increase of variance (by ~150 %) in boreal late spring to early summer and a decrease of variance (by ~60 %) in boreal autumn. This study focuses on understanding the physical mechanisms responsible for these changes in interannual variability in the tropical Atlantic. It demonstrates that the increase of variability in spring is a consequence of an increase in the variance of the El Niño-Southern Oscillation, which has a large impact on the tropical Atlantic via anomalous surface heat fluxes. A winter El Niño (La Niña) affects the eastern equatorial Atlantic by decreasing (increasing) cloud cover and surface wind speed, which is associated with anomalous downward (upward) shortwave radiation and reduced (enhanced) upward latent heat fluxes, creating positive (negative) sea surface temperature (SST) anomalies over the region from winter to spring. On the other hand, the decrease of SST variance in autumn is due to a deeper mean thermocline, which weakens the impact of thermocline movement on SST variation. The comparison between the model results and observations is not straightforward owing to the influence of model biases and the lack of a major AMOC weakening event in the instrumental record. However, it is argued that the basic physical mechanisms found in the model simulations are likely to be robust and therefore have relevance to understanding tropical Atlantic variability in the real world, perhaps with modified seasonality.
Abstract:
Based on the availability of hemispheric gridded data sets from observations, analyses and global climate models, objective cyclone identification methods have been developed and applied to these data sets. Owing to the large number of investigation methods combined with the variety of different datasets, a multitude of results exists, not only for the recent climate period but also for the next century, assuming anthropogenically changed conditions. Different thresholds, different physical quantities, and considerations of different atmospheric vertical levels add to a picture that is difficult to combine into a common view of cyclones, their variability and trends, in the real world and in GCM studies. Thus, this paper gives a comprehensive review of the current knowledge on climatologies of mid-latitude cyclones for the Northern and Southern Hemispheres for the present climate and for its possible changes under anthropogenic climate conditions.
Abstract:
We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from an emission-driven rather than concentration-driven perturbed parameter ensemble of a global climate model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, land carbon cycle, ocean physics and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses when considering emission-driven rather than concentration-driven simulations (with 10–90th percentile ranges of 1.7 K for the aggressive mitigation scenario, up to 3.9 K for the high-end, business-as-usual scenario). A small minority of simulations resulting from combinations of strong atmospheric feedbacks and carbon cycle responses show temperature increases in excess of 9 K (RCP8.5) and, even under aggressive mitigation (RCP2.6), temperatures in excess of 4 K. While the simulations point to much larger temperature ranges for emission-driven experiments, they do not change existing expectations (based on previous concentration-driven experiments) of the timescales over which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. For both SRES A1B and the Representative Concentration Pathways (RCPs), the concentration scenario used to drive GCM ensembles lies towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration-driven experiments to under-sample strong feedback responses in future projections. Our ensemble of emission-driven simulations spans the global temperature response of the CMIP5 emission-driven simulations, except at the low end. Combinations of low climate sensitivity and low carbon cycle feedbacks lead a number of CMIP5 responses to lie below our ensemble range. The ensemble simulates a number of high-end responses which lie above the CMIP5 carbon cycle range. These high-end simulations can be linked to sampling a number of stronger carbon cycle feedbacks and to sampling climate sensitivities above 4.5 K. This latter aspect highlights the priority of identifying real-world climate-sensitivity constraints which, if achieved, would reduce the upper bound of projected global mean temperature change. The ensembles of simulations presented here provide a framework to explore relationships between present-day observables and future changes, while the large spread of projected future changes highlights the ongoing need for such work.
Abstract:
The reversibility of the Atlantic meridional overturning circulation (AMOC) is investigated in multi-model experiments using global climate models (GCMs) in which CO2 concentrations are increased by 1 or 2 % per annum to 2× or 4× preindustrial conditions. After a period of stabilisation, the CO2 is decreased back to preindustrial conditions. In most experiments, when the CO2 decreases the AMOC recovers before becoming anomalously strong. This "overshoot" is up to an extra 18.2 Sv, or 104 % of its preindustrial strength, and the period with an anomalously strong AMOC can last for several hundred years. The magnitude of this overshoot is shown to be related to the build-up of salinity in the subtropical Atlantic during the previous period of high CO2 levels. The magnitude of this build-up is partly related to anthropogenic changes in the hydrological cycle. The mechanisms linking the subtropical salinity increase to the subsequent overshoot are analysed, supporting the relationship found. This understanding is used to explain differences seen in some models and scenarios. In one experiment there is no overshoot because there is little salinity build-up, partly as a result of model differences in the hydrological cycle response to increased CO2 levels and partly because of a less aggressive scenario. Another experiment has a delayed overshoot, possibly as a result of a very weak AMOC in that GCM when CO2 is high. This study identifies aspects of overshoot behaviour that are robust across a multi-model and multi-scenario ensemble, and those that differ between experiments. These results could inform an assessment of the real-world AMOC response to decreasing CO2.
Abstract:
Although Theory of International Politics is a standard-bearer for explanatory theory in international relations (IR), Waltz’s methodology has been subject to numerous quite disparate analyses. One reason why it has proved hard to pin down is that too little attention has been paid to how, in practice, Waltz approaches real-world problems. Despite his neopositivist rhetoric, Waltz applies neorealism in a notably loose, even indeterminate, fashion. There is therefore a disjunction between what he says and what he does. This is partly explained by his unsatisfactory attempt to reconcile his avowed neopositivism with his belief that international politics is characterized by organized complexity. The inconsistencies thus created also help to make sense of why competing interpretations of his methodology have emerged. Some aspects of his work do point beyond these particular methodological travails in ways that will continue to be of interest to IR theorists, but its most enduring methodological lesson may be that rhetoric and practice do not necessarily fit harmoniously together.
Abstract:
Lord Kelvin (William Thomson) made important contributions to the study of atmospheric electricity during a brief but productive period from 1859 to 1861. By 1859 Kelvin had recognised the need for “incessant recording” of atmospheric electrical parameters, and responded by inventing both the water dropper equaliser for measuring the atmospheric potential gradient (PG) and photographic data logging. The water dropper equaliser was widely adopted internationally and is still in use today. Following theoretical considerations of electric field distortion by local topography, Kelvin developed a portable electrometer, using it to investigate the PG on the Scottish island of Arran. During these environmental measurements, Kelvin may have unwittingly detected atmospheric PG changes during the solar activity of August/September 1859 associated with the “Carrington event”, which is interesting in the context of his later statements that solar magnetic influence on the Earth was impossible. Kelvin’s atmospheric electricity work presents an early representative study in quantitative environmental physics, through the application of mathematical principles to an environmental problem, the design and construction of bespoke instrumentation for real-world measurements, and recognition of the limitations of the original theoretical view revealed by experimental work.
Abstract:
Students in the architecture, engineering, and construction disciplines are often challenged with visualizing and understanding the complex spatial and temporal relationships involved in designing and constructing three-dimensional (3D) structures. An evolving body of research traces the use of educational computer simulations to enhance student learning experiences through testing real-world scenarios and the development of student decision-making skills. Ongoing research at Pennsylvania State University aims to improve engineering education in construction through interactive construction project learning applications in an immersive virtual reality environment. This paper describes the first- and second-generation development of the Virtual Construction Simulator (VCS), a tool that enables students to simultaneously create and review construction schedules through 3D model interaction. The educational value and utility of VCS were assessed through surveys, focus group interviews, and a student exercise conducted in a construction management class. Results revealed that VCS is a valuable and effective four-dimensional (4D) model creation and schedule review application that fosters collaborative work and greater student task focus. This paper concludes with a discussion of the findings and the future development steps of the VCS educational simulation.
Abstract:
Forgetting immediate physical reality and being aware of one's location in the simulated world is critical to enjoyment and performance in virtual environments, be it an interactive 3D game such as Quake or an online virtual 3D community space such as Second Life. Answers to the question "where am I?" at two levels, whether the locus is in the immediate real world as opposed to the virtual world and whether one is aware of the spatial co-ordinates of that locus, hold the key to any virtual 3D experience. While 3D environments, especially virtual environments, and their impact on spatial comprehension have been studied in disciplines such as architecture, it is difficult to determine the relative contributions of specific attributes such as screen size or stereoscopy to spatial comprehension, since most studies treat the technology as a monolith (box-centered). Using as its theoretical basis the variable-centered approach put forth by Nass and Mason (1990), which breaks the technology down into its component variables and their corresponding values, this paper looks at the contributions of five variables common to most virtual environments (stereoscopy, screen size, field of view, level of realism and level of detail) to spatial comprehension and presence. The variable-centered approach can be daunting, as an increase in the number of variables can exponentially increase the number of conditions and resources required. We overcome this drawback by using a fractional factorial design for the experiment. This study has completed the first wave of data collection; the next phase starts in January 2007 and is expected to be complete by February 2007. Theoretical and practical implications of the study are discussed.
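As a purely illustrative note on the fractional factorial design mentioned above (the abstract does not state the study's actual levels, generator or resolution, so the details below are assumptions): with the five variables listed and two levels each, a full factorial would require 2^5 = 32 conditions, whereas a half-fraction needs only 16. A short sketch:

```python
# Illustrative 2^(5-1) half-fraction factorial design for the five variables
# named in the abstract, assuming two levels per variable (-1/+1 coding).
# The fifth factor is aliased with the four-way interaction (E = ABCD),
# cutting the 32 full-factorial conditions to 16.
from itertools import product

factors = ["stereoscopy", "screen_size", "field_of_view", "realism", "detail"]

design = []
for a, b, c, d in product((-1, 1), repeat=4):   # full factorial in 4 factors
    e = a * b * c * d                            # defining relation E = ABCD
    design.append(dict(zip(factors, (a, b, c, d, e))))

print(f"{len(design)} conditions instead of {2 ** 5}")
for run in design:
    print(run)
```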
Abstract:
We propose the Tetra Pak case as a real-world example to study the implications of multiproduct activity for European Competition Policy. Tetra Pak, a monopolist in aseptic carton packaging of liquid food, competes with Elopak in the nonaseptic sector. The EC Commission used the effect of Tetra Pak's dominance in the aseptic sector on its rival's performance as evidence of the former's anticompetitive behavior. With linear demand and cost functions and interdependent demands, the Commission's position can be supported. However, a more general model suggests that the Commission's conclusions cannot be supported as the unique outcome of the analysis of the information available.
Abstract:
An efficient market incorporates news into prices immediately and fully. Tests for efficiency in financial markets have been undermined by information leakage. We test for efficiency in sports betting markets, real-world markets where news breaks remarkably cleanly. Applying a novel identification strategy to high-frequency data, we investigate the reaction of prices to goals scored on the ‘cusp’ of half-time. This strategy allows us to separate the market's response to major news (a goal) from its reaction to the continual flow of minor game-time news. On our evidence, prices update swiftly and fully.
Abstract:
Many urban surface energy balance models now exist. These vary in complexity from simple schemes that represent the city as a concrete slab, to those which incorporate detailed representations of momentum and energy fluxes distributed within the atmospheric boundary layer. While many of these schemes have been evaluated against observations, with some models even compared with the same data sets, such evaluations have not been undertaken in a controlled manner to enable direct comparison. For other types of climate model, for instance the Project for Intercomparison of Land-Surface Parameterization Schemes (PILPS) experiments (Henderson-Sellers et al., 1993), such controlled comparisons have been shown to provide important insights into both the mechanics of the models and the physics of the real world. This paper describes the progress that has been made to date on a systematic and controlled comparison of urban surface schemes. The models to be considered, and their key attributes, are described, along with the methodology to be used for the evaluation.