Abstract:
We describe a method for evaluating an ensemble of predictive models given a sample of observations comprising the model predictions and the outcome event measured with error. Our formulation allows us to simultaneously estimate the measurement error parameters, the true outcome (the gold standard), and a relative weighting of the predictive scores. We describe the conditions necessary to estimate the gold standard and for these estimates to be calibrated, and detail how our approach is related to, but distinct from, standard model combination techniques. We apply our approach to data from a study evaluating a collection of BRCA1/BRCA2 gene mutation prediction scores. In this example, genotype is measured with error by one or more genetic assays. We estimate the true genotype for each individual in the dataset, the operating characteristics of the commonly used genotyping procedures, and a relative weighting of the scores. Finally, we compare the scores against the gold-standard genotype and find that Mendelian scores are, on average, the more refined and better calibrated of those considered, and that the comparison is sensitive to measurement error in the gold standard.
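To make the latent gold-standard idea concrete, below is a minimal expectation-maximization sketch for the simplest version of this setting: a binary genotype observed through several conditionally independent noisy assays (a Hui–Walter-style latent class model). It recovers the posterior gold standard and the assay operating characteristics but omits the score-weighting component; all names are illustrative, and this is not the paper's actual estimator.

```python
import numpy as np

def em_latent_gold_standard(y, n_iter=500, tol=1e-8):
    """EM for a latent binary gold standard observed through noisy assays.

    y : (n_subjects, n_assays) float array of 0/1 assay calls,
        with np.nan marking assays not run on a subject.
    Returns posterior P(z_i = 1), prevalence, and per-assay
    sensitivity / specificity (the "operating characteristics").
    Assumes assays are conditionally independent given true genotype.
    """
    n, k = y.shape
    obs = ~np.isnan(y)
    pi, sens, spec = 0.5, np.full(k, 0.9), np.full(k, 0.9)
    for _ in range(n_iter):
        # E-step: log-likelihood of each subject's calls under z=1 and z=0.
        log_p1 = np.full(n, np.log(pi))
        log_p0 = np.full(n, np.log(1 - pi))
        for j in range(k):
            m, yj = obs[:, j], y[obs[:, j], j]
            log_p1[m] += yj * np.log(sens[j]) + (1 - yj) * np.log(1 - sens[j])
            log_p0[m] += (1 - yj) * np.log(spec[j]) + yj * np.log(1 - spec[j])
        w = 1.0 / (1.0 + np.exp(log_p0 - log_p1))     # posterior P(z_i = 1)
        # M-step: update prevalence and assay operating characteristics.
        pi_new = w.mean()
        for j in range(k):
            m = obs[:, j]
            sens[j] = (w[m] * y[m, j]).sum() / w[m].sum()
            spec[j] = ((1 - w[m]) * (1 - y[m, j])).sum() / (1 - w[m]).sum()
        np.clip(sens, 1e-6, 1 - 1e-6, out=sens)        # guard against log(0)
        np.clip(spec, 1e-6, 1 - 1e-6, out=spec)
        if abs(pi_new - pi) < tol:
            pi = pi_new
            break
        pi = pi_new
    return w, pi, sens, spec
```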
Abstract:
The purpose of this research was to develop a working physical model of the focused plenoptic camera and to develop software that can process the measured image intensity, reconstruct it into a full-resolution image, and produce a depth map from the corresponding rendered image. The plenoptic camera is a specialized imaging system designed to acquire spatial, angular, and depth information in a single intensity measurement. This camera can also computationally refocus an image by adjusting the patch size used to reconstruct the image. The published methods have been vague and conflicting, so the motivation behind this research was to decipher the work that has been done in order to develop a working proof-of-concept model. This thesis outlines the theory behind plenoptic camera operation and shows how the measured intensity from the image sensor can be turned into a full-resolution rendered image with its corresponding depth map. The depth map can be created by cross-correlating adjacent sub-images created by the microlenslet array (MLA). The full-resolution image reconstruction can be done by taking a patch from each MLA sub-image and piecing them together like a puzzle; the patch size determines which object plane will be in focus. This thesis also gives a rigorous explanation of the design constraints involved in building a plenoptic camera. Plenoptic camera data from Adobe were used to help develop the algorithms written to create a rendered image and its depth map. Finally, using the algorithms developed from these tests and the knowledge gained in developing the plenoptic camera, a working experimental system was built, which successfully generated a rendered image and its corresponding depth map.
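The two core operations the thesis describes, patch-based full-resolution rendering and cross-correlation depth estimation between adjacent sub-images, can be sketched in a few lines of NumPy. This is a schematic reading of the described pipeline, assuming square sub-images on a regular grid; the function names and conventions are ours, not the thesis code.

```python
import numpy as np

def render_full_resolution(raw, s, p):
    """Tile a centered p x p patch from each s x s microlens sub-image.

    raw : 2-D sensor image whose dimensions are multiples of s.
    The patch size p selects which object plane appears in focus.
    """
    rows, cols = raw.shape[0] // s, raw.shape[1] // s
    out = np.empty((rows * p, cols * p), dtype=raw.dtype)
    off = (s - p) // 2
    for r in range(rows):
        for c in range(cols):
            sub = raw[r*s:(r+1)*s, c*s:(c+1)*s]
            out[r*p:(r+1)*p, c*p:(c+1)*p] = sub[off:off+p, off:off+p]
    return out

def disparity(sub_a, sub_b, max_shift):
    """Horizontal shift maximizing the normalized cross-correlation of two
    adjacent sub-images; the shift maps to depth of the local object plane."""
    a = (sub_a - sub_a.mean()).ravel()
    best, best_score = 0, -np.inf
    for d in range(-max_shift, max_shift + 1):
        b = np.roll(sub_b, d, axis=1)     # circular shift: fine for a sketch
        b = (b - b.mean()).ravel()
        score = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if score > best_score:
            best, best_score = d, score
    return best
```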
Abstract:
Wind energy has been one of the fastest-growing sectors of the nation's renewable energy portfolio for the past decade, and the same trend is projected for the upcoming years given aggressive governmental policies for reducing fossil fuel dependency. So-called Horizontal Axis Wind Turbine (HAWT) technologies have shown great technological promise and outstanding commercial penetration. Given this broad acceptance, wind turbine size has grown exponentially over time. However, safety and economic concerns have emerged as a result of new design tendencies toward massive-scale wind turbine structures with high slenderness ratios and complex shapes, typically located in remote areas (e.g. offshore wind farms). In this regard, safe operation requires not only first-hand information about actual structural dynamic conditions under aerodynamic action, but also a deep understanding of the environmental factors in which these multibody rotating structures operate. Given the cyclo-stochastic patterns of the wind loading exerting pressure on a HAWT, a probabilistic framework is appropriate to characterize the risk of failure in terms of resistance and serviceability conditions at any given time. Furthermore, sources of uncertainty such as material imperfections, buffeting and flutter, aeroelastic damping, gyroscopic effects, and turbulence, among others, call for a more sophisticated mathematical framework that can properly handle all these sources of indetermination. The modeling complexity that arises from these characterizations demands a data-driven experimental validation methodology to calibrate and corroborate the model. To this end, System Identification (SI) techniques offer a spectrum of well-established numerical methods appropriate for stationary, deterministic, data-driven schemes, capable of predicting the actual dynamic states (eigenrealizations) of traditional time-invariant dynamic systems. Consequently, a modified data-driven SI metric is proposed, based on Subspace Realization Theory and adapted for stochastic, non-stationary, time-varying systems, as is the case for a HAWT's complex aerodynamics. Simultaneously, this investigation explores the characterization of the turbine loading and response envelopes for critical failure modes of the structural components of the wind turbine. In the long run, both the aerodynamic framework (theoretical model) and the system identification (experimental model) will be merged in a numerical engine formulated as a search algorithm for model updating, known as Adaptive Simulated Annealing (ASA). This iterative engine is based on a set of function minimizations computed by a metric called the Modal Assurance Criterion (MAC). In summary, the Thesis is composed of four major parts: (1) development of an analytical aerodynamic framework that predicts interacting wind-structure stochastic loads on wind turbine components; (2) development of a novel tapered-swept-curved Spinning Finite Element (SFE) that includes damped gyroscopic effects and axial-flexural-torsional coupling; (3) a novel data-driven structural health monitoring (SHM) algorithm via stochastic subspace identification methods; and (4) a numerical search (optimization) engine based on ASA and MAC capable of updating the SFE aerodynamic model.
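Since the MAC drives the ASA model-updating loop, it may help to see the criterion written out. A minimal sketch using the standard MAC definition, not the thesis's implementation:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors.

    MAC = |phi_a^H phi_b|^2 / ((phi_a^H phi_a)(phi_b^H phi_b));
    1 means perfectly correlated shapes, 0 means orthogonal ones.
    """
    num = abs(np.vdot(phi_a, phi_b)) ** 2
    den = np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real
    return num / den

def mac_matrix(Phi_model, Phi_test):
    """Pairwise MAC between columns of the analytical and the identified
    mode sets; a model-updating search such as simulated annealing tries
    to drive this matrix toward the identity."""
    return np.array([[mac(a, b) for b in Phi_test.T] for a in Phi_model.T])
```

A MAC matrix close to the identity indicates that analytical and identified mode shapes pair up one-to-one, which is the matching condition the optimization engine seeks.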
Evolutionary demography of long-lived monocarpic perennials: a time-lagged integral projection model
Abstract:
1. The evolution of flowering strategies (when and at what size to flower) in monocarpic perennials is determined by balancing current reproduction against expected future reproduction, both of which are largely determined by size-specific patterns of growth and survival. However, because of the difficulty of following long-lived individuals throughout their lives, this theory has largely been tested using short-lived species (< 5 years). 2. Here, we tested this theory using the long-lived monocarpic perennial Campanula thyrsoides, which can live up to 16 years. We used a novel approach that combined permanent-plot and herb-chronology data from a 3-year field study to parameterize and validate integral projection models (IPMs). 3. As in other monocarpic species, the rosette leaves of C. thyrsoides wither over winter, so size cannot be measured in the year of flowering. We therefore extended the existing IPM framework to incorporate an additional time delay, which arises because flowering demography must be predicted from rosette size in the year before flowering. 4. We found that all the main demographic functions (growth, survival probability, flowering probability and fecundity) were strongly size-dependent, and there was a pronounced threshold size for flowering. There was good agreement between the distribution of flowering ages predicted by the IPMs and that estimated in the field. There was also mostly good agreement between the IPM predictions and the direct quantitative field measurements of the demographic parameters λ, R₀ and T. We therefore conclude that the model captures the main demographic features of the field populations. 5. Elasticity analysis indicated that changes in the survival and growth function had the largest effect (c. 80%) on λ, considerably larger than in short-lived monocarps. We found only weak selection pressure operating on the observed flowering strategy, which was close to the predicted evolutionarily stable strategy. 6. Synthesis. The extended IPM accurately described the demography of a long-lived monocarpic perennial using data collected over a relatively short period. We show that the evolution of flowering strategies in short- and long-lived monocarps seems to follow the same general rules, but with a longevity-related emphasis on survival over fecundity.
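For readers unfamiliar with IPMs, the following sketch shows the standard (no-lag) midpoint-rule discretization that the paper extends; the time-lagged variant replaces the dependence on size at flowering with size in the previous year. The function arguments are placeholders for fitted demographic regressions, not the paper's parameterization.

```python
import numpy as np

def ipm_lambda(growth, survival, p_flower, fecundity, recruit_dist,
               L, U, n_mesh=100):
    """Dominant eigenvalue (population growth rate lambda) of an IPM
    discretized with the midpoint rule on the size interval [L, U].

    growth(y, x)   : density of size y next year for a surviving size-x rosette
    survival(x)    : size-dependent survival probability
    p_flower(x)    : size-dependent flowering probability
    fecundity(x)   : recruits produced by a flowering size-x plant
    recruit_dist(y): size density of new recruits
    Monocarpy: flowering plants reproduce and die, so only non-flowering
    rosettes enter the growth/survival kernel P.
    """
    h = (U - L) / n_mesh
    x = L + h * (np.arange(n_mesh) + 0.5)      # mesh midpoints
    X, Y = np.meshgrid(x, x)                   # X[i,j] = x_j, Y[i,j] = y_i
    P = h * growth(Y, X) * survival(X) * (1 - p_flower(X))
    F = h * np.outer(recruit_dist(x), p_flower(x) * fecundity(x))
    K = P + F                                  # full kernel: K[i,j] = k(y_i, x_j)
    return np.max(np.linalg.eigvals(K).real)   # dominant eigenvalue is real
```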
Abstract:
Many methodologies dealing with the prediction or simulation of soft tissue deformations on medical image data require preprocessing of the data in order to produce a different shape representation that complies with standard methodologies, such as mass–spring networks or the finite element method (FEM). On the other hand, methodologies working directly in the image space normally do not take the mechanical behavior of tissues into account and tend to lack the physical foundations driving soft tissue deformations. This chapter presents a method to simulate soft tissue deformations based on coupled concepts from image analysis and mechanics theory. The proposed methodology is based on a robust stochastic approach that takes into account material properties retrieved directly from the image, concepts from continuum mechanics, and FEM. The optimization framework is solved within a hierarchical Markov random field (HMRF), which is implemented on the graphics processing unit (GPU).
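As a rough illustration of the kind of energy minimization an MRF formulation involves, here is a plain (non-hierarchical, CPU-side) iterated-conditional-modes solver for a discrete labeling problem. The chapter's HMRF and its GPU implementation are more elaborate; everything below is a generic sketch, not the chapter's algorithm.

```python
import numpy as np

def icm_mrf(data_cost, beta=1.0, n_iter=10):
    """Iterated conditional modes for a discrete MRF on a pixel grid.

    data_cost : (H, W, n_labels) unary costs derived from the image
                (e.g. how well each candidate displacement or material
                label explains the local intensity)
    beta      : weight of a Potts smoothness prior over 4-neighbors
    Returns a label map that locally minimizes unary + pairwise energy.
    """
    H, W, n_labels = data_cost.shape
    labels = data_cost.argmin(axis=2)          # initialize from data term only
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                neigh = [labels[a, b]
                         for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                         if 0 <= a < H and 0 <= b < W]
                cost = data_cost[i, j].copy()
                for l in range(n_labels):      # add disagreement penalty
                    cost[l] += beta * sum(l != m for m in neigh)
                labels[i, j] = cost.argmin()
    return labels
```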
Abstract:
One of the most influential statements in the anomie theory tradition has been Merton's argument that the volume of instrumental property crime should be higher where there is a greater imbalance between the degree of commitment to monetary success goals and the degree of commitment to legitimate means of pursuing such goals. Contemporary anomie theories stimulated by Merton's perspective, most notably Messner and Rosenfeld's institutional anomie theory (IAT), have expanded the scope conditions by emphasizing lethal criminal violence as an outcome to which anomie theory is highly relevant, and virtually all contemporary empirical studies have focused on applying the perspective to explaining spatial variation in homicide rates. In the present paper, we argue that current explications of Merton's theory and IAT have not adequately conveyed the relevance of the core features of the anomie perspective to lethal violence. We propose an expanded anomie model in which an unbalanced pecuniary value system – the core causal variable in Merton's theory and IAT – translates into higher levels of homicide primarily in indirect ways: by increasing levels of firearm prevalence, drug market activity, and property crime, and by enhancing the degree to which these factors stimulate lethal outcomes. Using aggregate-level data collected during the mid-to-late 1970s for a sample of relatively large social aggregates within the U.S., we find a significant effect on homicide rates of an interaction term reflecting high levels of commitment to monetary success goals and low levels of commitment to legitimate means. Virtually all of this effect is accounted for by the higher levels of property crime and drug market activity that occur in areas with an unbalanced pecuniary value system. Our analysis also reveals that property crime is more apt to lead to homicide under conditions of high structural disadvantage. These and other findings underscore the potential value of elaborating the anomie perspective to explicitly account for lethal violence.
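The paper's design, an interaction term plus mediation through intervening mechanisms, can be illustrated with a small regression sketch on synthetic data. The variable names and the data-generating process are invented for illustration and do not reproduce the study's measures or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the aggregate-level data; names are illustrative.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "goals": rng.normal(size=n),           # commitment to monetary success goals
    "means": rng.normal(size=n),           # commitment to legitimate means
    "disadvantage": rng.normal(size=n),
    "gun_prevalence": rng.normal(size=n),
    "drug_activity": rng.normal(size=n),
    "property_crime": rng.normal(size=n),
})
df["log_homicide"] = (0.3 * df.goals - 0.3 * df.means
                      - 0.4 * df.goals * df.means      # the anomie interaction
                      + 0.5 * df.disadvantage
                      + rng.normal(size=n))

# Total effect of the goals-by-means imbalance on homicide ...
total = smf.ols("log_homicide ~ goals * means + disadvantage", df).fit()
# ... versus what remains once the hypothesized mediators are controlled;
# with the study's real data, the interaction should largely be absorbed.
direct = smf.ols("log_homicide ~ goals * means + disadvantage"
                 " + gun_prevalence + drug_activity + property_crime", df).fit()
print(total.params["goals:means"], direct.params["goals:means"])
```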
Abstract:
A limited but accumulating body of research and theoretical commentary offers support for core claims of the “institutional-anomie theory” of crime (IAT) and points to areas needing further development. In this paper, which focuses on violent crime, we clarify the concept of social institutions, elaborate the cultural component of IAT, derive implications for individual behavior, summarize empirical applications, and propose directions for future research. Drawing on Talcott Parsons, we distinguish the “subjective” and “objective” dimensions of institutional dynamics and discuss their interrelationship. We elaborate on the theory’s cultural component with reference to Durkheim’s distinction between “moral” and “egoistic” individualism and propose that a version of the egoistic type characterizes societies in which the economy dominates the institutional structure, anomie is rampant, and levels of violent crime are high. We also offer a heuristic model of IAT that integrates macro- and individual levels of analysis. Finally, we discuss briefly issues for the further theoretical elaboration of this macro-social perspective on violent crime. Specifically, we call attention to the important tasks of explaining the emergence of economic dominance in the institutional balance of power and of formulating an institutional account for distinctive punishment practices, such as the advent of mass incarceration in the United States.
Abstract:
A model of theoretical science is set forth to guide the formulation of general theories around abstract concepts and processes. Such theories permit explanatory application to many phenomena that are not ostensibly alike, and in so doing encompass socially disapproved violence, making special theories of violence unnecessary. Though none is completely adequate for the explanatory job, at least seven examples of general theories that help account for deviance make up the contemporary theoretical repertoire. From them, we can identify abstractions built around features of offenses, aspects of individuals, the nature of social relationships, and different social processes. Although further development of general theories may be hampered by potential indeterminacy of the subject matter and by the possibility of human agency, maneuvers to deal with such obstacles are available.
Abstract:
The theory of the intensities of 4f-4f transitions introduced by B.R. Judd and G.S. Ofelt in 1962 has become a centerpiece of rare-earth optical spectroscopy over the past five decades. Many fundamental studies have since explored the physical origins of the Judd–Ofelt theory and have proposed numerous extensions to the original model. A great number of studies have applied the Judd–Ofelt theory to a wide range of rare-earth-doped materials, many of them with important applications in solid-state lasers, optical amplifiers, phosphors for displays and solid-state lighting, upconversion and quantum-cutting materials, and fluorescent markers. This paper takes the view of the experimentalist who is interested in appreciating the basic concepts, implications, assumptions, and limitations of the Judd–Ofelt theory in order to properly apply it to practical problems. We first present the formalism for calculating the wavefunctions of 4f electronic states in concise form and then show its application to the calculation and fitting of 4f-4f transition intensities. The potential, limitations and pitfalls of the theory are discussed, and a detailed case study of LaCl3:Er3+ is presented.
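For orientation, the central fitted quantity of the theory can be stated compactly (the standard Judd–Ofelt form, not anything specific to this paper):

```latex
% Standard Judd-Ofelt electric-dipole line strength for a transition
% J -> J': three phenomenological intensity parameters Omega_{2,4,6}
% weight tabulated doubly reduced matrix elements of U^(lambda).
\begin{equation}
  S_{\mathrm{ED}}(J \to J') = \sum_{\lambda=2,4,6} \Omega_\lambda
  \left| \langle \psi' J' \,\|\, U^{(\lambda)} \,\|\, \psi J \rangle \right|^2
\end{equation}
```

In practice, experimental line strengths extracted from measured absorption bands are regressed against the tabulated reduced matrix elements to determine the three Ω_λ, which in turn predict emission intensities, branching ratios, and radiative lifetimes.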
Abstract:
So far, social psychology in sport has primarily focused on team cohesion, and many studies and meta-analyses have tried to demonstrate a relation between the cohesiveness of a team and its performance. How a team really co-operates, and how individual actions are integrated into a team action, is a question that has received relatively little attention in research. This may, at least in part, be due to the lack of a theoretical framework for collective actions, a dearth that has only recently begun to challenge sport psychologists. In this presentation a framework for a comprehensive theory of teams in sport is outlined, and its potential to integrate the following presentations is put up for discussion. In the model developed by von Cranach, Ochsenbein and Valach (1986), teams are information-processing organisms, and team actions need to be investigated on two levels: the individual team member and the group as an entity. Elements to be considered are the task, the social structure, the information-processing structure and the execution structure. Obviously, different tasks require different social structures, communication and co-ordination. From a cognitivist point of view, internal representations (or mental models) guide behaviour mainly in situations requiring quick reactions and adaptations, where deliberate or contingency planning is difficult. In sport teams, the collective representation contains the elements of the team situation, that is, the team task and the team members, and of the team processes, that is, communication and co-operation. Different meta-perspectives may be distinguished, and these bear the potential to explain the actions of efficient teams. Cranach, M. von, Ochsenbein, G., & Valach, L. (1986). The group as a self-active system: Outline of a theory of group action. European Journal of Social Psychology, 16, 193-229.
Abstract:
In the setting of high-dimensional linear models with Gaussian noise, we investigate the possibility of confidence statements connected to model selection. Although there exist numerous procedures for adaptive (point) estimation, the construction of adaptive confidence regions is severely limited (cf. Li in Ann Stat 17:1001–1008, 1989). The present paper sheds new light on this gap. We develop exact and adaptive confidence regions for the best approximating model in terms of risk. One of our constructions is based on a multiscale procedure and a particular coupling argument. Utilizing exponential inequalities for noncentral χ²-distributions, we show that the risk and quadratic loss of all models within our confidence region are uniformly bounded by the minimal risk times a factor close to one.
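One way to write the abstract's uniform risk guarantee in symbols (the exact region construction and constants are in the paper; this is a paraphrase, not a quoted theorem):

```latex
% A symbolic paraphrase of the abstract's guarantee: with probability at
% least 1 - alpha, every model m in the confidence region \hat{C} has risk
% within a factor (1 + epsilon) of the best approximating model's risk.
\begin{equation}
  \mathbb{P}\!\left( \sup_{m \in \hat{C}} R(m) \le
  (1 + \varepsilon) \, \min_{m'} R(m') \right) \ge 1 - \alpha
\end{equation}
```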
Abstract:
The (2 + 1)-d U(1) quantum link model is a gauge theory, amenable to quantum simulation, with a spontaneously broken SO(2) symmetry emerging at a quantum phase transition. Its low-energy physics is described by a (2 + 1)-d RP(1) effective field theory, perturbed by an SO(2) breaking operator, which prevents the interpretation of the emergent pseudo-Goldstone boson as a dual photon. At the quantum phase transition, the model mimics some features of deconfined quantum criticality, but remains linearly confining. Deconfinement only sets in at high temperature.