951 results for Generalized Derivation


Relevance: 20.00%

Abstract:

2000 Mathematics Subject Classification: 62E16, 62F15, 62H12, 62M20.

Relevance: 20.00%

Abstract:

2010 Mathematics Subject Classification: 47A10.

Relevance: 20.00%

Abstract:

Exposure to counter-stereotypic gender role models (e.g., a woman engineer) has been shown to successfully reduce the application of biased gender stereotypes. We tested the hypothesis that such efforts may more generally lessen the application of stereotypic knowledge in other (non-gendered) domains. Specifically, based on the notion that counter-stereotypes can stimulate a lesser reliance on heuristic thinking, we predicted that contesting gender stereotypes would eliminate a more general group prototypicality bias in the selection of leaders. Three studies supported this hypothesis. After exposing participants to a counter-stereotypic gender role model, group prototypicality no longer predicted leadership evaluation and selection. We discuss the implications of these findings for groups and organizations seeking to capitalize on the benefits of an increasingly diverse workforce.

Relevance: 20.00%

Abstract:

Principal component analysis (PCA) is a well-established dimensionality-reduction technique, and kernel PCA (KPCA) has been proposed for nonlinear statistical data analysis. However, KPCA fails to detect the nonlinear structure of data well when outliers are present. To address this problem, this paper presents a novel algorithm, named iterative robust KPCA (IRKPCA). IRKPCA deals well with outliers and can be carried out in an iterative manner, which makes it suitable for processing incremental input data. As in traditional robust PCA (RPCA), a binary field is employed to characterize the outlier process, and the optimization problem is formulated as maximizing the marginal distribution of a Gibbs distribution. In this paper, this optimization problem is solved by stochastic gradient descent. In IRKPCA, the outlier process lies in a high-dimensional feature space, so the kernel trick is used. IRKPCA can be regarded both as a kernelized version of RPCA and as a robust form of the kernel Hebbian algorithm. Experimental results on synthetic data demonstrate the effectiveness of IRKPCA. © 2010 Taylor & Francis.
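The abstract builds on standard kernel PCA. As a point of reference, here is a minimal batch (non-robust) RBF-KPCA sketch in NumPy; the function name, `gamma`, and the small eigenvalue floor are illustrative choices, and none of this reproduces the paper's IRKPCA (no binary outlier field, no iterative updates):

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=1.0):
    """Project X onto the leading principal components of an RBF kernel
    feature space (standard batch KPCA, the non-robust baseline)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T      # pairwise squared distances
    K = np.exp(-gamma * d2)                             # RBF Gram matrix
    n = K.shape[0]
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n  # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                     # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]         # pick the largest ones
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                                  # projections of training points
```

A robust variant would down-weight Gram-matrix entries flagged as outliers; per the abstract, the paper does this with a binary outlier field optimized through a Gibbs distribution.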

Relevance: 20.00%

Abstract:

OBJECTIVE: The objective of this study was to examine medical illness and anxiety, depressive, and somatic symptoms in older medical patients with generalized anxiety disorder (GAD). METHOD: A case-control study was designed and conducted in the University of California, San Diego (UCSD) Geriatrics Clinics. A total of fifty-four older medical patients with GAD and 54 matched controls participated. MEASUREMENTS: The measurements used for this study include: Brief Symptom Inventory-18, Mini International Neuropsychiatric Interview, and the Anxiety Disorders Interview Schedule. RESULTS: Older medical patients with GAD reported higher levels of somatic symptoms, anxiety, and depression than other older adults, as well as higher rates of diabetes and gastrointestinal conditions. In a multivariate model that included somatic symptoms, medical conditions, and depressive and anxiety symptoms, anxiety symptoms were the only significant predictors of GAD. CONCLUSION: These results suggest first, that older medical patients with GAD do not primarily express distress as somatic symptoms; second, that anxiety symptoms in geriatric patients should not be discounted as a byproduct of medical illness or depression; and third, that older adults with diabetes and gastrointestinal conditions may benefit from screening for anxiety.

Relevance: 20.00%

Abstract:

A correlation scheme (leading to a special equilibrium called “soft” correlated equilibrium) is applied for two-person finite games in extensive form with perfect information. Randomization by an umpire takes place over the leaves of the game tree. At every decision point players have the choice either to follow the recommendation of the umpire blindly or freely choose any other action except the one suggested. This scheme can lead to Pareto-improved outcomes of other correlated equilibria. Computational issues of maximizing a linear function over the set of soft correlated equilibria are considered and a linear-time algorithm in terms of the number of edges in the game tree is given for a special procedure called “subgame perfect optimization”.

Relevance: 20.00%

Abstract:

Traditional voting games are special cooperative games with transferable utility, so-called simple games, in which the players are parties and the value of a coalition is 1 or 0 depending on whether that coalition is strong enough to pass a given piece of legislation. In this article we introduce the concept of generalized weighted voting games, in which the number of seats held by each party is a random variable. We illustrate the usefulness of the new approach with examples from Hungary.
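The abstract gives no construction, but one hedged way to make "seat counts as random variables" concrete is a Monte Carlo estimate of the probability that a coalition reaches the quota. The party names, seat distributions, and quota below are all hypothetical:

```python
import random

def winning_probability(coalition, seat_distributions, quota,
                        n_samples=20000, seed=1):
    """Estimate the probability that a coalition reaches the quota when each
    party's seat count is a random variable (here: a uniform choice over a
    small list of equally likely outcomes)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_samples):
        # Draw one seat count per coalition member from its distribution
        total = sum(rng.choice(seat_distributions[p]) for p in coalition)
        if total >= quota:
            wins += 1
    return wins / n_samples

# Hypothetical example: three parties with uncertain pre-election seat counts
seats = {
    "A": [40, 45, 50],
    "B": [20, 25, 30],
    "C": [10, 15, 20],
}
p = winning_probability(["A", "B"], seats, quota=76)
```

In a generalized weighted voting game, the "value" of a coalition is then this winning probability rather than a crisp 0 or 1.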

Relevance: 20.00%

Abstract:

The notion of common prior is well understood and widely used in the incomplete information games literature, and for ordinary type spaces the common prior is defined. Pintér and Udvari (2011) introduce the notion of generalized type space. Generalized type spaces are models for various bounded rationality issues, for finite belief hierarchies, and for unawareness, among others. In this paper we define the notion of common prior for generalized type spaces. Our results are as follows: the generalization (1) suggests a new form of common prior for ordinary type spaces, and (2) shows some quantum game theoretic results (Brandenburger and La Mura, 2011) in a new light.

Relevance: 20.00%

Abstract:

Ordinary type spaces (Heifetz and Samet, 1998) are essential ingredients of incomplete information games. With ordinary type spaces one can capture the notions of beliefs, belief hierarchies, common priors, and so on. However, ordinary type spaces cannot handle notions such as finite belief hierarchies and unawareness. In this paper we consider a generalization of ordinary type spaces and introduce so-called generalized type spaces, which can capture every notion that ordinary type spaces can, and more, including finite belief hierarchies and unawareness. We also demonstrate that the universal generalized type space exists.


Relevance: 20.00%

Abstract:

Intertemporal choice is a crucial question in economic modeling: it describes decisions that require trade-offs among outcomes occurring at different points in time. In economic modeling, exponential discounting is the most widespread, even though empirical studies show it has weak explanatory power. Generalized hyperbolic discounting, favored in economic psychology for its strong descriptive validity, is in turn very hard to use for economic modeling purposes. Quasi-hyperbolic discounting spread quickly because it captures the main psychological phenomena while remaining tractable in analytical models, and it has therefore become common to substitute it for generalized hyperbolic discounting. This paper argues that the two hyperbolic models cannot be substituted for each other in long-term decisions, especially for series of outcomes; hence results obtained with quasi-hyperbolic discounting for long-term questions should be revisited wherever they assumed that the generalized hyperbolic model would yield the same conclusions.
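The three discount functions under discussion are easy to compare directly. The sketch below uses the common beta-delta form of quasi-hyperbolic discounting and the Loewenstein-Prelec form of generalized hyperbolic discounting; all parameter values are hypothetical, chosen only to show that the two hyperbolic variants diverge as the horizon grows:

```python
def exponential(t, delta=0.95):
    """Standard exponential discount factor delta**t."""
    return delta ** t

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    """Beta-delta model: full weight at t=0, a one-off penalty beta afterwards."""
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, alpha=1.0, gamma=1.0):
    """Loewenstein-Prelec form (1 + alpha*t)**(-gamma/alpha)."""
    return (1.0 + alpha * t) ** (-gamma / alpha)

# With these (hypothetical) parameters the two hyperbolic variants are of
# similar magnitude at short horizons but drift apart as t grows, which is
# the substitution problem the paper raises for long-term decisions.
short = [(quasi_hyperbolic(t), generalized_hyperbolic(t)) for t in range(3)]
```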

Relevance: 20.00%

Abstract:

This dissertation develops a new figure of merit to measure the similarity (or dissimilarity) of Gaussian distributions through a novel concept that relates the Fisher distance to the percentage of data overlap. The derivations are expanded to provide a generalized mathematical platform for determining an optimal separating boundary of Gaussian distributions in multiple dimensions. Real-world data used for implementation and for feasibility studies were provided by Beckman-Coulter. Although the data used are flow cytometric in nature, the mathematics are derived generally enough to cover other types of data, as long as their statistical behavior approximates Gaussian distributions. Because this new figure of merit is heavily based on the statistical nature of the data, a new filtering technique is introduced to accommodate the accumulation process involved with histogram data. When data are accumulated into a frequency histogram, they are inherently smoothed in a linear fashion, since an averaging effect takes place as the histogram is generated. This new filtering scheme addresses data accumulated in the uneven resolution of the channels of the frequency histogram. The qualitative interpretation of flow cytometric data is currently a time-consuming and imprecise method for evaluating histogram data. The method proposed here offers a broader spectrum of capabilities in the analysis of histograms, since the figure of merit derived in this dissertation integrates within its mathematics both a measure of similarity and the percentage of overlap between the distributions under analysis.
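The percentage-of-overlap idea can be illustrated numerically for one-dimensional Gaussians as the integral of the pointwise minimum of the two densities. This is only a sketch of the overlap concept, not the dissertation's Fisher-distance-based figure of merit, and the function name is hypothetical:

```python
import numpy as np

def overlap_fraction(mu1, sd1, mu2, sd2, n=200001):
    """Shared probability mass of two 1-D Gaussians: the integral of
    min(p1, p2), approximated by a Riemann sum on a wide grid."""
    lo = min(mu1 - 8 * sd1, mu2 - 8 * sd2)
    hi = max(mu1 + 8 * sd1, mu2 + 8 * sd2)
    x, dx = np.linspace(lo, hi, n, retstep=True)
    p1 = np.exp(-0.5 * ((x - mu1) / sd1) ** 2) / (sd1 * np.sqrt(2.0 * np.pi))
    p2 = np.exp(-0.5 * ((x - mu2) / sd2) ** 2) / (sd2 * np.sqrt(2.0 * np.pi))
    return float(np.minimum(p1, p2).sum() * dx)
```

For two unit-variance Gaussians whose means are two standard deviations apart, this evaluates to 2*Phi(-1), about 0.317, i.e. roughly 32% shared mass.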

Relevance: 20.00%

Abstract:

Since multimedia data, such as images and videos, are far more expressive and informative than ordinary text-based data, people find them more attractive to communicate and express with. Additionally, with the rising popularity of social networking tools such as Facebook and Twitter, multimedia information retrieval can no longer be considered a solitary task; rather, people constantly collaborate with one another while searching and retrieving information. But the very cause of the popularity of multimedia data, namely the huge amount and variety of information a single data object can carry, makes its management a challenging task. Multimedia data are commonly represented as multidimensional feature vectors and carry high-level semantic information. These two characteristics make them very different from traditional alphanumeric data, so trying to manage them with frameworks and rationales designed for primitive alphanumeric data is inefficient. An index structure is the backbone of any database management system, and the index structures present in existing relational database management frameworks cannot handle multimedia data effectively. Thus, in this dissertation, a generalized multidimensional index structure is proposed that accommodates, seamlessly within one single framework, both the atypical multidimensional representation and the semantic information carried by different multimedia data. Additionally, the dissertation investigates the evolving relationships among multimedia data in a collaborative environment and how such information can help customize the design of the proposed index structure when it is used to manage multimedia data in a shared environment. Extensive experiments were conducted to demonstrate the usability and superior performance of the proposed framework over current state-of-the-art approaches.

Relevance: 20.00%

Abstract:

Lake Analyzer is a numerical code coupled with supporting visualization tools for determining indices of mixing and stratification that are critical to the biogeochemical cycles of lakes and reservoirs. Stability indices, including Lake Number, Wedderburn Number, Schmidt Stability, and thermocline depth are calculated according to established literature definitions and returned to the user in a time series format. The program was created for the analysis of high-frequency data collected from instrumented lake buoys, in support of the emerging field of aquatic sensor network science. Available outputs for the Lake Analyzer program are: water temperature (error-checked and/or down-sampled), wind speed (error-checked and/or down-sampled), metalimnion extent (top and bottom), thermocline depth, friction velocity, Lake Number, Wedderburn Number, Schmidt Stability, mode-1 vertical seiche period, and Brunt-Väisälä buoyancy frequency. Secondary outputs for several of these indices delineate the parent thermocline depth (seasonal thermocline) from the shallower secondary or diurnal thermocline. Lake Analyzer provides a program suite and best practices for the comparison of mixing and stratification indices in lakes across gradients of climate, hydro-physiography, and time, and enables a more detailed understanding of the resulting biogeochemical transformations at different spatial and temporal scales.
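As an illustration of one of the listed outputs, the Brunt-Väisälä buoyancy frequency can be sketched from a temperature-depth profile via a density profile, using N^2 = (g/rho) * d(rho)/dz with depth increasing downward. The density polynomial below is a standard freshwater approximation (maximum density near 4 deg C); this is not Lake Analyzer's actual code, and variable names are hypothetical:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def water_density(temp_c):
    """Freshwater density (kg/m^3) from temperature via a widely used
    polynomial approximation."""
    t = np.asarray(temp_c, dtype=float)
    return 1000.0 * (1.0 - (t + 288.9414) / (508929.2 * (t + 68.12963))
                     * (t - 3.9863) ** 2)

def buoyancy_frequency_sq(depth_m, temp_c):
    """N^2 (s^-2) profile from a temperature-depth profile; depth increases
    downward, so a positive density gradient means stable stratification."""
    rho = water_density(temp_c)
    drho_dz = np.gradient(rho, depth_m)  # kg/m^4, finite differences
    return (G / rho) * drho_dz
```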

Relevance: 20.00%

Abstract:

Airborne Light Detection and Ranging (LIDAR) technology has become the primary method to derive high-resolution Digital Terrain Models (DTMs), which are essential for studying Earth's surface processes, such as flooding and landslides. The critical step in generating a DTM is to separate ground and non-ground measurements in a voluminous LIDAR point dataset using a filter, because the DTM is created by interpolating ground points. As one of the widely used filtering methods, the progressive morphological (PM) filter has the advantages of classifying the LIDAR data at the point level, a linear computational complexity, and preserving the geometric shapes of terrain features. The filter works well in an urban setting with a gentle slope and a mixture of vegetation and buildings. However, the PM filter often incorrectly removes ground measurements at topographic highs, along with large non-ground objects, because it uses a constant threshold slope, resulting in "cut-off" errors. A novel cluster analysis method was developed in this study and incorporated into the PM filter to prevent the removal of ground measurements at topographic highs. Furthermore, to obtain optimal filtering results for an area with undulating terrain, a trend analysis method was developed to adaptively estimate the slope-related thresholds of the PM filter based on changes of topographic slope and the characteristics of non-terrain objects. The comparison of the PM and generalized adaptive PM (GAPM) filters for selected study areas indicates that the GAPM filter preserves most of the ground points incorrectly removed as "cut-off" errors by the PM filter. The application of the GAPM filter to seven ISPRS benchmark datasets shows that the GAPM filter reduces the filtering error by 20% on average, compared with the method used by the popular commercial software TerraScan.
The combination of the cluster method, adaptive trend analysis, and the PM filter allows users without much experience in processing LIDAR data to effectively and efficiently identify ground measurements for the complex terrains in a large LIDAR data set. The GAPM filter is highly automatic and requires little human input. Therefore, it can significantly reduce the effort of manually processing voluminous LIDAR measurements.
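The PM filter that GAPM generalizes can be sketched in one dimension: a morphological opening (erosion then dilation) with progressively larger windows, flagging points that sit above the opened surface by more than a slope-dependent threshold. All parameters below are hypothetical, and the cluster and trend analyses that distinguish GAPM are not shown:

```python
import numpy as np

def pm_ground_filter(z, cell=1.0, max_window=16, slope=0.3,
                     dh0=0.2, dh_max=2.5):
    """1-D sketch of a progressive morphological ground filter: apply a
    morphological opening with growing windows; points raised above the
    opened surface by more than a slope-based threshold are non-ground."""
    z = np.asarray(z, dtype=float)
    ground = np.ones(z.size, dtype=bool)
    surface = z.copy()
    w = 3
    while w <= max_window:
        half = w // 2
        # Erosion (local min) followed by dilation (local max) = opening
        eroded = np.array([surface[max(0, i - half):i + half + 1].min()
                           for i in range(z.size)])
        opened = np.array([eroded[max(0, i - half):i + half + 1].max()
                           for i in range(z.size)])
        # Elevation threshold grows with window size, capped at dh_max
        dh = min(dh0 + slope * (w - 1) * cell, dh_max)
        ground &= (surface - opened) <= dh
        surface = opened
        w = 2 * w - 1
    return ground

# Hypothetical transect: flat ground with a 5 m tall, 6-cell wide building
z = np.zeros(40)
z[10:16] = 5.0
is_ground = pm_ground_filter(z)
```

Small windows remove narrow objects first; by the time the window exceeds the building's width, the opening flattens it and the large elevation difference flags it as non-ground, while flat terrain stays within the threshold.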