973 results for Iterative Closest Point (ICP) Algorithm
Abstract:
Audit report on the City of Center Point, Iowa for the year ended June 30, 2007
Abstract:
Audit report on the City of Strawberry Point, Iowa for the year ended June 30, 2007
Abstract:
Calceology is the study of recovered archaeological leather footwear and comprises the conservation, documentation and identification of leather shoe components and shoe styles. Recovered leather shoes are complex artefacts that carry technical, stylistic and personal information about the culture and the people who used them. The current method in calceological research for typology and chronology is comparison with parallel examples, though its use is hampered by the absence of basic definitions and the lack of a taxonomic hierarchy. The research findings on the primary cutting patterns, used for making all leather footwear, are integrated with the named style method and the Goubitz notation, resulting in a combined methodology that serves as a basis for a typological organisation of recovered footwear and a chronology of named shoe styles. The history of calceological research is examined in chapter two, accompanied by a review of methodological problems as seen in the literature. Through the examination of the various documentation and research techniques used during the history of calceological studies, the reasons why a standard typology and methodology failed to develop are investigated. The continual invention of a new research method for each publication of a recovered leather assemblage hindered the development of a single standard methodology. Chapter three covers the initial research with the database, through which the primary cutting patterns were identified and the named styles were defined. The chronological span of each named style was established through iterative cross-site seriation and named style comparisons. The consistent use of the primary cutting patterns is interpreted as a consequence of the constraints imposed by the leather and the forms needed to cover the foot. Basic parts of the shoe patterns and the foot are defined, and terms are provided for identifying the key points for pattern making.
Chapter four presents the seventeen primary cutting patterns and their sub-types, which are divided into three main groups: six integral soled patterns, four hybrid soled patterns and seven separately soled patterns. The description of each primary cutting pattern includes its letter code, pattern layout, construction principle, closing seam placement and a list of sub-types. The named shoe styles and their relative chronology are presented in chapter five. Nomenclature for the named styles is based on the find location of the first published example plus the code letter of the primary cutting pattern. The named styles are presented in chronological order from prehistory through to the late 16th century. Short descriptions of the named styles are given and illustrated with examples of recovered archaeological leather footwear, reconstructions of archaeological shoes and iconographical sources. Chapter six presents the documentation of recovered archaeological leather using the Goubitz notation, an inventory and description of the style elements and fastening methods used for defining named shoe styles, technical information about sole/upper constructions, and the consequences of the use of lasts and sewing forms for style identification and for fastening placement in relation to the instep point. The chapter concludes with further technical information on the implications for researchers of shoemaking, pattern making and reconstructive archaeology. The conclusion restates the original research question of why a group of primary cutting patterns appears to have been used consistently throughout the European archaeological record. The quantitative and qualitative results from the database show the use of these patterns, but it is the properties of the leather that impose the use of the primary cutting patterns.
The combined methodology of primary pattern identification, named style and artefact registration provides a framework for calceological research.
Abstract:
Special investigation of the City of Center Point Library for the period January 1, 2006 through December 6, 2007
Abstract:
The generalization of simple correspondence analysis, for two categorical variables, to multiple correspondence analysis, where there may be three or more variables, is not straightforward, both from a mathematical and a computational point of view. In this paper we detail the exact computational steps involved in performing a multiple correspondence analysis, including the special aspects of adjusting the principal inertias to correct the percentages of inertia, supplementary points and subset analysis. Furthermore, we give the algorithm for joint correspondence analysis, where the cross-tabulations of all unique pairs of variables are analysed jointly. The code in the R language for every step of the computations is given, as well as the results of each computation.
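As a rough illustration of the core computation described above, the following Python sketch performs the basic MCA steps (indicator matrix, standardized residuals, SVD) together with a Greenacre-style adjustment of the principal inertias. The paper's own code is in R; the function and variable names here are illustrative, not the paper's, and the adjustment shown is a simplified version.

```python
import numpy as np

def mca(data):
    """Multiple correspondence analysis via SVD of the indicator matrix.

    data: (n, Q) array of integer category codes, one column per variable.
    Returns the principal inertias of the indicator matrix and a
    Greenacre-style adjusted version (simplified sketch).
    """
    n, Q = data.shape
    # Build the indicator (complete disjunctive) matrix Z
    blocks = []
    for q in range(Q):
        cats = np.unique(data[:, q])
        blocks.append((data[:, q][:, None] == cats[None, :]).astype(float))
    Z = np.hstack(blocks)
    P = Z / Z.sum()
    r = P.sum(axis=1)               # row masses (all equal to 1/n)
    c = P.sum(axis=0)               # column masses
    # Matrix of standardized residuals
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    _, sv, _ = np.linalg.svd(S, full_matrices=False)
    inertias = sv ** 2
    # Adjust inertias above the 1/Q threshold to correct the percentages
    adj = np.array([(Q / (Q - 1) * (lam - 1 / Q)) ** 2
                    for lam in inertias if lam > 1 / Q])
    return inertias, adj
```

A useful sanity check is that the unadjusted inertias sum to J/Q - 1, where J is the total number of categories over all Q variables.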
Abstract:
In this paper we propose a Pyramidal Classification Algorithm, which together with an appropriate aggregation index produces an indexed pseudo-hierarchy (in the strict sense) without inversions or crossings. The computer implementation of the algorithm makes it possible to carry out simulation tests by Monte Carlo methods in order to study the efficiency and sensitivity of the pyramidal methods of the Maximum, the Minimum and UPGMA. The results shown in this paper may help to choose between the three classification methods proposed, in order to obtain the classification that best fits the original structure of the population, provided we have a priori information concerning this structure.
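The three aggregation indices compared above correspond to single, complete and average linkage. As an illustrative sketch only (not the paper's pyramidal algorithm, which produces a pseudo-hierarchy rather than a plain hierarchy), the following naive agglomerative procedure shows how the Minimum, Maximum and UPGMA indices can disagree on the same distance matrix; all names are illustrative.

```python
def agglomerate(D, method="min"):
    """Naive agglomerative clustering on a symmetric distance matrix D.

    method: 'min' (Minimum / single linkage), 'max' (Maximum / complete
    linkage) or 'upgma' (unweighted average linkage). Returns the merge
    history as (cluster_a, cluster_b, aggregation_index) tuples.
    """
    clusters = {i: [i] for i in range(len(D))}
    history = []
    while len(clusters) > 1:
        best = None
        for a in clusters:
            for b in clusters:
                if a >= b:
                    continue
                dists = [D[i][j] for i in clusters[a] for j in clusters[b]]
                if method == "min":
                    d = min(dists)
                elif method == "max":
                    d = max(dists)
                else:  # 'upgma': average of all pairwise distances
                    d = sum(dists) / len(dists)
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        history.append((a, b, d))
        # Merge cluster b into cluster a
        clusters[a] = clusters[a] + clusters.pop(b)
    return history
```

On four points at positions 0, 1, 10 and 11 on a line, all three indices first merge the two tight pairs, but the final merge is indexed at 9 (Minimum), 11 (Maximum) or 10 (UPGMA), which is exactly the kind of sensitivity the simulations study.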
Abstract:
We provide methods for forecasting variables and predicting turning points in panel Bayesian VARs. We specify a flexible model which accounts for both interdependencies in the cross section and time variations in the parameters. Posterior distributions for the parameters are obtained for a particular type of diffuse prior, for Minnesota-type priors and for hierarchical priors. Formulas for multistep, multiunit point and average forecasts are provided. An application to the problem of forecasting the growth rate of output and of predicting turning points in the G-7 illustrates the approach. A comparison with alternative forecasting methods is also provided.
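As a rough, hedged illustration of one ingredient mentioned above, the following Python sketch computes the posterior mean of VAR coefficients under a much simplified Minnesota-style prior (random-walk prior mean, a single tightness parameter, no cross-variable scaling). The paper's panel specification, with cross-sectional interdependencies and time-varying parameters, is substantially richer; every name here is an assumption for illustration.

```python
import numpy as np

def minnesota_posterior_mean(Y, X, lam=0.2):
    """Posterior mean of VAR coefficients under a Minnesota-style prior.

    Y: (T, n) left-hand-side observations; X: (T, k) lagged regressors,
    whose first n columns are assumed to be the first own lags. The prior
    centers each equation on a random walk and shrinks toward it with
    tightness lam (smaller lam = tighter prior). Simplified sketch.
    """
    T, n = Y.shape
    k = X.shape[1]
    B0 = np.zeros((k, n))
    B0[:n, :n] = np.eye(n)            # random-walk prior mean on own first lag
    prior_prec = np.eye(k) / lam**2   # shared prior precision
    A = X.T @ X + prior_prec
    # Standard conjugate-normal posterior mean: blend of OLS and prior
    return np.linalg.solve(A, X.T @ Y + prior_prec @ B0)
```

With a very tight prior the posterior mean collapses to the random-walk matrix; with a very loose prior it approaches OLS, which is the usual shrinkage behavior of this prior family.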
Abstract:
Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a merely informative role, supplying analysts with formatted and summarized data that they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality. A firm considering using a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, where customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to that of an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance, and say nothing as to the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent reliance on fixing some model as the universe for their testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least enjoy no consensus on their validity. How then to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process.
We take care to emphasize that we want to prove that the said model is the cause of the performance, and to compare against an (incumbent) process rather than against an alternate model. In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms controlled a set of flights during adjacent weeks, and their behavior and results were observed over a relatively long period of time (9 months). In parallel, a group of control flights was managed using the traditional mix of manual and algorithmic control (the incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing has an indisputable model of customer behavior, but the experimental design and analysis of results are less clear. In this paper we describe the philosophy behind the experiment, the organizational challenges, the design and setup of the experiment, and outline the analysis of the results. This paper is a complement to a (more technical) related paper that describes the econometrics and statistical analysis of the results.
Abstract:
In models where privately informed agents interact, agents may need to form higher order expectations, i.e. expectations of other agents' expectations. This paper develops a tractable framework for solving and analyzing linear dynamic rational expectations models in which privately informed agents form higher order expectations. The framework is used to demonstrate that the well-known problem of the infinite regress of expectations identified by Townsend (1983) can be approximated to an arbitrary accuracy with a finite dimensional representation under quite general conditions. The paper is constructive and presents a fixed point algorithm for finding an accurate solution and provides weak conditions that ensure that a fixed point exists. To help intuition, Singleton's (1987) asset pricing model with disparately informed traders is used as a vehicle for the paper.
Abstract:
We present a simple randomized procedure for the prediction of a binary sequence. The algorithm uses ideas from recent developments of the theory of the prediction of individual sequences. We show that if the sequence is a realization of a stationary and ergodic random process, then the average number of mistakes converges, almost surely, to that of the optimum, given by the Bayes predictor.
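The flavor of such a randomized procedure can be sketched with a toy randomized weighted-majority predictor over two constant experts (always-0 and always-1). The paper's algorithm uses a far richer expert class and a different construction, so this is only an illustration; all names are illustrative.

```python
import random

def randomized_predictor(sequence, eta=0.5, seed=0):
    """Randomized weighted-majority prediction of a binary sequence.

    Two constant experts (always predict 0, always predict 1) are kept;
    each bit is predicted by sampling an expert in proportion to its
    weight, and the expert that erred is penalized multiplicatively.
    Returns the total number of prediction mistakes.
    """
    rng = random.Random(seed)
    w = [1.0, 1.0]                  # weights for experts predicting 0 and 1
    mistakes = 0
    for bit in sequence:
        p1 = w[1] / (w[0] + w[1])   # probability of predicting a 1
        guess = 1 if rng.random() < p1 else 0
        if guess != bit:
            mistakes += 1
        # Multiplicative update: penalize the expert that was wrong
        w[1 - bit] *= (1 - eta)
    return mistakes
```

On a highly regular sequence the weight of the wrong expert decays geometrically, so the expected number of mistakes stays bounded while the sequence length grows, which is the kind of per-round optimality the abstract describes in a much more general setting.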
Abstract:
We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out a suboptimal behavior of the popular Bayesian weighted average algorithm.
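The Bayesian weighted average algorithm mentioned at the end can be sketched, for the simplest case of finitely many constant Bernoulli experts, as a mixture predictor under log loss; its regret against the best expert is then at most the log of the number of experts. This is a generic illustration of the algorithm, not the paper's construction, and the names are illustrative.

```python
import math

def mixture_log_loss(sequence, expert_probs):
    """Bayesian weighted-average (mixture) prediction under log loss.

    expert_probs: each expert's constant probability of emitting a 1.
    Returns (mixture_loss, best_expert_loss). For any binary sequence,
    mixture_loss <= best_expert_loss + log(len(expert_probs)).
    """
    w = [1.0] * len(expert_probs)
    mix_loss = 0.0
    for bit in sequence:
        total = sum(w)
        p1 = sum(wi * pi for wi, pi in zip(w, expert_probs)) / total
        p = p1 if bit == 1 else 1 - p1
        mix_loss += -math.log(p)
        # Bayes update: reweight each expert by its likelihood of the bit
        w = [wi * (pi if bit == 1 else 1 - pi)
             for wi, pi in zip(w, expert_probs)]
    best_loss = min(sum(-math.log(pi if b == 1 else 1 - pi)
                        for b in sequence)
                    for pi in expert_probs)
    return mix_loss, best_loss
```

The log-of-class-size regret bound follows because the mixture's cumulative loss telescopes to the negative log of the average expert likelihood; the paper's point is that on richer classes this weighted average can nonetheless be suboptimal relative to the minimax regret.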